Driven to Distraction – the future of car safety

If you haven’t gotten a new car in a while you may not have noticed that the future of the dashboard looks like this:


That’s it. A single screen replacing all the dashboard gauges, knobs and switches. But behind that screen is an increasing level of automation that hides a ton of complexity.

At times everything you need is on the screen at a glance. At other times you have to page through menus and poke at the screen while driving – and, at 70 mph, try to understand whether you or your automated driving system is in control of the car. All while figuring out how to use any new features, menus or rearranged user interface that might have been updated overnight.

In the beginning of any technology revolution the technology gets ahead of the institutions designed to measure and regulate safety and standards. Both the vehicle’s designers and regulators will eventually catch up, but in the meantime we’re on the steep part of a learning curve – part of a million-person beta test – about what’s the right driver-to-vehicle interface.

We went through this with airplanes. And we’re reliving that transition in cars. Things will break, but in a few decades we’ll come out the other side, look back and wonder how people ever drove any other way.

Here’s how we got here, what it’s going to cost us, and where we’ll end up.


Cars, Computers and Safety
Two massive changes are occurring in automobiles: 1) the transition from internal combustion engines to electric, and 2) the introduction of automated driving.

But a third equally important change that’s also underway is the (r)evolution of car dashboards from dials and buttons to computer screens. For the first 100 years cars were essentially a mechanical platform – an internal combustion engine and transmission with seats – controlled by mechanical steering, accelerator and brakes. Instrumentation to monitor the car was made up of dials and gauges: a speedometer, tachometer, and fuel, water and battery gauges.
By the 1970’s driving became easier as automatic transmissions replaced manual gear shifting and hydraulically assisted steering and brakes became standard. Comfort features evolved as well: climate control – first heat, later air-conditioning; and entertainment – AM radio, FM radio, 8-track tape, CD’s, and today streaming media. In the last decade GPS-driven navigation systems began to appear.

Safety
At the same time cars were improving, automobile companies fought safety improvements tooth and nail. By the 1970’s auto deaths in the U.S. averaged 50,000 a year. Over 3.7 million people have died in cars in the U.S. since they appeared – more than all U.S. war deaths combined. (This puts auto companies in the rarefied class of companies – along with tobacco companies – that have killed millions of their own customers.) Car companies argued that talking safety would scare off customers, or that the added cost of safety features would put them at a competitive disadvantage on price. But in reality, style was valued over safety.

Safety systems in automobiles have gone through three generations – passive systems and two generations of active systems. Today we’re about to enter a fourth generation – autonomous systems.

Passive safety systems are features that protect the occupants after a crash has occurred. They started appearing in cars in the 1930’s. Safety glass in windshields appeared in the 1930’s in response to horrific disfiguring crashes. Padded dashboards were added in the 1950’s, but it took Ralph Nader’s book, Unsafe at Any Speed, to spur federally mandated passive safety features in the U.S. beginning in the 1960’s: seat belts, crumple zones, collapsible steering wheels, four-way flashers and even better windshields. The Department of Transportation was created in 1966, but it wasn’t until 1979 that the National Highway Traffic Safety Administration (NHTSA) started crash-testing cars (the Insurance Institute for Highway Safety started their testing in 1995). In 1984 New York State mandated seat belt use (now required in 49 of the 50 states).

These passive safety features started to pay off in the mid-1970’s as overall auto deaths in the U.S. began to decline.

Active safety systems try to prevent crashes before they happen. These depended on the invention of low-cost, automotive-grade computers and sensors. For example, accelerometers-on-a-chip made airbags possible as they were able to detect a crash in progress. These began to appear in cars in the late 1980’s/1990’s and were required in 1998. In the 1990’s computers capable of real-time analysis of wheel sensors (position and slip) made ABS (anti-lock braking systems) possible. This feature was finally required in 2013.

Since 2005 a second generation of active safety features have appeared. They run in the background and constantly monitor the vehicle and space around it for potential hazards. They include: Electronic Stability Control, Blind Spot Detection, Forward Collision Warning, Lane Departure Warning, Rearview Video Systems, Automatic Emergency Braking, Pedestrian Automatic Emergency Braking, Rear Automatic Emergency Braking, Rear Cross Traffic Alert and Lane Centering Assist.

Autonomous Cars
Today, a fourth wave of safety features is appearing as Autonomous/Self-Driving features. These include Lane Centering/Auto Steer, Adaptive Cruise Control, Traffic Jam Assist, Self-Parking and Full Self-Driving. The National Highway Traffic Safety Administration (NHTSA) has adopted the six-level SAE standard to describe these vehicle automation features:
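The six SAE levels are easy to summarize. As a rough sketch (paraphrased level names and responsibilities, not the standard’s exact wording), they look like this:

```python
# SAE J3016 driving automation levels, paraphrased.
# The second field notes who monitors the driving environment at that level.
SAE_LEVELS = {
    0: ("No Driving Automation", "driver"),
    1: ("Driver Assistance", "driver"),
    2: ("Partial Driving Automation", "driver"),      # driver must supervise at all times
    3: ("Conditional Driving Automation", "system"),  # driver must take over when asked
    4: ("High Driving Automation", "system"),
    5: ("Full Driving Automation", "system"),
}

def must_supervise(level):
    """True if the human driver is still responsible for monitoring the road."""
    return SAE_LEVELS[level][1] == "driver"
```

The key human-factors boundary sits between Levels 2 and 3: below it the driver must watch the road continuously; above it the system watches, but may still hand control back.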

Getting above Level 2 is a really hard technical problem and has been discussed ad infinitum in other places. But what hasn’t gotten much attention is how drivers interact with these systems as the level of automation increases, and as the driving role shifts from the driver to the vehicle. Today, we don’t know whether there are times these features make cars less safe rather than more.

For example, Tesla and other cars have Level 2 and some Level 3 auto-driving features. Under Level 2 automation, drivers are supposed to monitor the automated driving because the system can hand back control of the car to you with little or no warning. In Level 3 automation drivers are not expected to monitor the environment, but again they are expected to be prepared to take control of the vehicle at all times, this time with notice.

Research suggests that drivers, when they aren’t actively controlling the vehicle, may be reading their phone, eating, looking at the scenery, etc. We really don’t know how drivers will perform under Level 2 and 3 automation. Drivers can lose situational awareness when they’re surprised by the behavior of the automation – asking: What is it doing now? Why did it do that? What is it going to do next? There are open questions as to whether drivers can attain/sustain sufficient attention to take control before they hit something. (Trust me, at highway speeds having a “take over immediately” symbol pop up while you are gazing at the scenery raises your blood pressure, and hopefully your reaction time.)

If these technical challenges weren’t enough for drivers to manage, these autonomous driving features are appearing at the same time that car dashboards are becoming computer displays.

We never had cars that worked like this. Not only will users have to get used to dashboards that are now computer displays, they are going to have to understand the subtle differences between automated and semi-automated features – and do so as auto makers are developing and constantly updating them. They may not have much help mastering the changes. Most users don’t read the manual, and, in some cars, the manuals aren’t even keeping up with the new features.

But while we never had cars that worked like this, we already have planes that do.
Let’s see what we’ve learned in 100 years of designing controls and automation for aircraft cockpits and pilots, and what it might mean for cars.

Aircraft Cockpits
Airplanes have gone through multiple generations of aircraft and cockpit automation. But unlike cars, which are only now seeing automated systems, airplanes first introduced automation during the 1920s and 1930s.

For their first 35 years airplane cockpits, much like early car dashboards, were simple – a few mechanical instruments for speed, altitude, relative heading and fuel. By the late 1930’s the British Royal Air Force (RAF) standardized on a set of flight instruments. Over the next decade this evolved into the “Basic T” instrument layout – the de facto standard of how aircraft flight instruments were laid out.

Engine instruments were added to measure the health of the aircraft engines – fuel and oil quantity, pressure, and temperature and engine speed.

Next, as airplanes became bigger, and the aerodynamic forces increased, it became difficult to manually move the control surfaces so pneumatic or hydraulic motors were added to increase the pilots’ physical force. Mechanical devices like yaw dampers and Mach trim compensators corrected the behavior of the plane.

Over time, navigation instruments were added to cockpits. At first, they were simple autopilots to just keep the plane straight and level and on a compass course. The next addition was a radio receiver to pick up signals from navigation stations. This was so pilots could set the desired bearing to the ground station into a course deviation display, and the autopilot would fly the displayed course.

In the 1960s, electrical systems began to replace the mechanical systems:

  • electric gyroscopes (INS) and autopilots using VOR (Very High Frequency Omni-directional Range) radio beacons to follow a track
  • auto-throttle – to manage engine power in order to maintain a selected speed
  • flight director displays – to show pilots how to fly the aircraft to achieve a preselected speed and flight path
  • weather radars – to see and avoid storms
  • Instrument Landing Systems – to help automate landings by giving the aircraft horizontal and vertical guidance.

By 1960 a modern jet cockpit (the Boeing 707) looked like this:

While it might look complicated, each of the aircraft instruments displayed a single piece of data. Switches and knobs were all electromechanical.

Enter the Glass Cockpit and Autonomous Flying
Fast forward to today and the third generation of aircraft automation. Today’s aircraft might look similar from the outside but on the inside four things are radically different:

  1. The clutter of instruments in the cockpit has been replaced with color displays creating a “glass cockpit”
  2. The airplanes’ engines got their own dedicated computer systems – FADEC (Full Authority Digital Engine Control) – to autonomously control the engines
  3. The engines themselves are an order of magnitude more reliable
  4. Navigation systems have turned into full-blown autonomous flight management systems

So today a modern airplane cockpit (an Airbus 320) looks like this:

Today, airplane navigation is a real-world example of autonomous driving – in the sky. Two additional systems, the Terrain Awareness and Warning System (TAWS) and the Traffic Collision Avoidance System (TCAS), gave pilots a view of what’s underneath and around them, dramatically increasing pilots’ situation awareness and flight safety. (Autonomy in the air is technically a much simpler problem because in the cruise portion of flight there are far fewer things to worry about in the air than in a car.)

Navigation in planes has turned into autonomous “flight management.” Instead of a course deviation dial, navigation information is now presented as a “moving map” on a display showing the position of navigation waypoints, by latitude and longitude. The position of the airplane no longer comes from ground radio stations, but rather is determined by Global Positioning System (GPS) satellites or autonomous inertial reference units. The route of flight is pre-programmed by the pilot (or uploaded automatically) and the pilot can connect the autopilot to autonomously fly the displayed route. Pilots enter navigation data into the Flight Management System with a keyboard. The flight management system also automates vertical and lateral navigation, fuel and balance optimization, throttle settings, critical speed calculation and execution of take-offs and landings.

Automating the airplane cockpit relieved pilots from repetitive tasks and allowed less skilled pilots to fly safely. Commercial airline safety dramatically increased as the commercial jet airline fleet quadrupled in size from ~5,000 in 1980 to over 20,000 today. (Most passengers today would be surprised to find out how much of their flight was flown by the autopilot versus the pilot.)

Why Cars Are Like Airplanes
And here lies the connection between what’s happened to airplanes with what is about to happen to cars.

The downside of glass cockpits and cockpit automation is that pilots no longer actively operate the aircraft but instead monitor it. And humans are particularly poor at monitoring for long periods. Pilots have lost basic manual and cognitive flying skills because of a lack of practice and feel for the aircraft. In addition, the need to “manage” the automation, particularly when involving data entry or retrieval through a key-pad, increased rather than decreased the pilot workload. And when systems fail, poorly designed user interfaces reduce a pilot’s situational awareness and can create cognitive overload.

Today, pilot errors – not mechanical failures – cause at least 70-80% of commercial airplane accidents. The FAA and NTSB have been analyzing crashes and have been writing extensively on how flight deck automation is affecting pilots. (Crashes like Asiana 214 happened when pilots selected the wrong mode on a computer screen.) The FAA has written the definitive document on how people and automated systems ought to interact.

In the meantime, the National Highway Traffic Safety Administration (NHTSA) has found that 94% of car crashes are due to human error – bad choices drivers make such as inattention, distraction, driving too fast, poor judgment/performance, drunk driving, lack of sleep.

NHTSA has begun to investigate how people will interact with both displays and automation in cars. They’re beginning to figure out:

  • What’s the right way to design a driver-to-vehicle interface on a screen to show:
    • Vehicle status gauges and knobs (speedometer, fuel/range, time, climate control)
    • Navigation maps and controls
    • Media/entertainment systems
  • How do you design for situation awareness?
    • What’s the best driver-to-vehicle interface to display the state of vehicle automation and Autonomous/Self-Driving features?
    • How do you manage the information available to understand what’s currently happening and project what will happen next?
  • What’s the right level of cognitive load when designing interfaces for decisions that have to be made in milliseconds?
    • What’s the distraction level from mobile devices? For example, how does your car handle your phone? Is it integrated into the system or do you have to fumble to use it?
  • How do you design a user interface for millions of users whose age may span from 16-90; with different eyesight, reaction time, and ability to learn new screen layouts and features?

Some of their findings are in the document Human-centric design guidance for driver-vehicle interfaces. But what’s striking is that very little of the NHTSA documents reference the decades of expensive lessons that the aircraft industry has learned. Glass cockpits and aircraft autonomy have traveled this road before. Even though aviation safety lessons have to be tuned to the different reaction times needed in cars (airplanes fly 10 times faster, yet pilots often have seconds or minutes to respond to problems, while in a car the decisions often have to be made in milliseconds) there’s a lot they can learn together. Aviation has gone 9 years in the U.S. with just one fatality, yet in 2017 37,000 people died in car crashes in the U.S.

There Are No Safety Ratings for Your Car As You Drive
In the U.S. aircraft safety has been proactive. Since 1927, new types of aircraft (and each sub-assembly) have been required to get a type approval from the FAA before they can be sold and issued an Airworthiness Certificate.

Unlike aircraft, car safety in the U.S. has been reactive. New models don’t require a type approval, instead each car company self-certifies that their car meets federal safety standards. NHTSA waits until a defect has emerged and then can issue a recall.

If you want to know how safe your model of car will be during a crash, you can look at the National Highway Traffic Safety Administration (NHTSA) New Car Assessment Program (NCAP) crash-tests, or the Insurance Institute for Highway Safety (IIHS) safety ratings. Both summarize how well the active and passive safety systems will perform in frontal, side, and rollover crashes. But today, there are no equivalent ratings for how safe cars are while you’re driving them. What is considered a good vs. bad user interface and do they have different crash rates? Does the transition from Level 1, 2 and 3 autonomy confuse drivers to the point of causing crashes? How do you measure and test these systems? What’s the role of regulators in doing so?

Given that the NHTSA and the FAA are both in the Department of Transportation (DoT), it makes you wonder whether these government agencies actively talk to and collaborate with each other and have integrated programs and common best practices. And whether they have extracted best practices from the NTSB. And from the early efforts of Tesla, Audi, Volvo, BMW, etc., it’s not clear they’ve looked at the airplane lessons either.

It seems like the logical thing for NHTSA to do during this autonomous transition is 1) start defining “best practices” in U/I and automation safety interfaces and 2) test Level 2-4 cars for safety while you drive (like the crash tests, but for situational awareness, cognitive load, etc., in a set of driving scenarios). There are great university programs already doing that research.

However, the DoT’s Automated Vehicles 3.0 plan moves the agency further from owning the role of “best practices” in U/I and automation safety interfaces. It assumes that car companies will do a good job self-certifying these new technologies, and it has no plans for safety testing and rating these new Level 2-4 autonomous features.

(Keep in mind that publishing best practices and testing for autonomous safety features is not the same as imposing regulations to slow down innovation.)

It looks like it might take an independent agency like the SAE to propose some best practices and ratings. (Or the slim possibility that the auto industry comes together and sets de facto standards.)

The Chaotic Transition
It took 30 years, from 1900 to 1930, to transition from horses and buggies in city streets to automobiles dominating traffic. During that time former buggy drivers had to learn a completely new set of rules to control their cars. And the roads in those 30 years were a mix of traffic – it was chaotic.
In New York City the tipping point was 1908 when the number of cars passed the number of horses. The last horse-drawn trolley left the streets of New York in 1917. (It took another decade or two to displace the horse from farms, public transport and wagon delivery systems.) Today, we’re about to undergo the same transition.

Cars are on the path for full autonomy, but we’re seeing two different approaches on how to achieve Level 4 and 5 “hands off” driverless cars. Existing car manufacturers, locked into the existing car designs, are approaching this step-wise – adding additional levels of autonomy over time – with new models or updates; while new car startups (Waymo, Zoox, Cruise, etc.) are attempting to go right to Level 4 and 5.

We’re going to have 20 or so years with the roads full of a mix of millions of cars – some being manually driven, some with Level 2 and 3 driver assistance features, and others autonomous vehicles with “hands-off” Level 4 and 5 autonomy. It may take at least 20 years before autonomous vehicles become the dominant platforms. In the meantime, this mix of traffic is going to be chaotic. (Some suggest that during this transition we require autonomous vehicles to have signs in their rear window, like student drivers, but this time saying, “Caution AI on board.”)

As there will be no government best practices for U/I or scores for autonomy safety, learning and discovery will be happening on the road. That makes the ability for car companies to have over-the-air updates for both the dashboard user interface and the automated driving features essential. Incremental and iterative updates will add new features, while fixing bad ones. Engaging customers to make them realize they’re part of the journey will ultimately make this a successful experiment.

My bet is much like when airplanes went to glass cockpits with increasingly automated systems, we’ll create new ways drivers crash their cars, while ultimately increasing overall vehicle safety.

But in the next decade or two, with the government telling car companies “roll your own”, it’s going to be one heck of a ride.

Lessons Learned

  • There’s a (r)evolution underway as car dashboards move from dials and buttons to computer screens, alongside the introduction of automated driving
    • Computer screens and autonomy will both create new problems for drivers
    • There are no standards to measure the safety of these systems
    • There are no standards for how information is presented
  • Aircraft cockpits are 10 to 20 years ahead of car companies in studying and solving this problem
    • Car and aircraft regulators need to share their learnings
    • Car companies can reduce crashes and deaths if they look to aircraft cockpit design for car user interface lessons
  • The Department of Transportation has removed barriers to the rapid adoption of autonomous vehicles
    • Car companies “self-certify” whether their U/I and autonomy are safe
    • There are no equivalents of crash safety scores for driving safety with autonomous features
  • Over-the-air updates for car software will become essential
    • But the downside is they could dramatically change the U/I without warning
  • On the path for full autonomy we’ll have three generations of cars on the road
    • The transition will be chaotic, so hang on, it’s going to be a bumpy ride, but the destination – safety for everyone on the road – will be worth it

The Red Queen Problem – Innovation in the DoD and Intelligence Community

“…it takes all the running you can do, to keep in the same place.”
The Red Queen, Through the Looking-Glass

Innovation, disruption and accelerators have all become urgent buzzwords in the Department of Defense and Intelligence community. They are a reaction to the “red queen problem” but aren’t actually solving the problem. Here’s why.


In the 20th century our nation faced a single adversary – the Soviet Union. During the Cold War the threat from the Soviets was quantifiable and often predictable. We could specify requirements, budget and acquire weapons based on a known foe. We could design warfighting tactics based on knowing the tactics of our opponent. Our defense department and intelligence community owned proprietary advanced tools and technology. We and our contractors had the best technology domain experts. We could design and manufacture the best systems. We used these tools to keep pace with the Soviet threats and eventually used silicon, semiconductors and stealth to create an offset strategy to leapfrog their military.

That approach doesn’t work anymore. In the 21st century you need a scorecard to keep track of the threats: Russia, China, North Korea, Iran, ISIS in Yemen/Libya/Philippines, Taliban, Al-Qaeda, hackers for hire, etc. Some are strategic peers, some are near peers in specific areas, some are threats as non-state disrupters operating with no rules.

In addition to the proliferation of threats, most of the tools and technologies that were uniquely held by the DoD/IC or only within the reach of large nation states are now commercially available (Cyber, GPS, semiconductors, analytics, centrifuges, drones, genetic engineering, agile and lean methodologies, ubiquitous Internet, crypto and smartphones, etc.). In most industries, manufacturing is no longer a core competence of the U.S.

U.S. agencies that historically owned technology superiority and fielded cutting-edge technologies now find that off-the-shelf solutions may be more advanced than the solutions they are working on, or that adversaries can rapidly create asymmetric responses using these readily available technologies.

The result is that our systems, organizations, headcount and budget – designed for 20th century weapons procurements and warfighting tactics on a predictable basis – can’t scale to meet all these simultaneous and unpredictable challenges. Today, our DoD and national security agencies are running as hard as they can just to stay in place, but our adversaries are continually innovating faster than our traditional systems can respond. They have gotten inside our OODA loop (Observe, Orient, Decide and Act).

We believe that continuous disruption can only be met with a commitment to continuous innovation.

Pete Newell and I have spent a lot of time bringing continuous innovation to government organizations. Newell ran the U.S. Army’s Rapid Equipping Force on the battlefields of Iraq and Afghanistan finding and deploying technology solutions against agile insurgents. He’s spent the last four years in Silicon Valley out of uniform continuing that work. I’ve spent the last six years teaching our country’s scientists how to rapidly turn scientific breakthroughs into deliverable products by creating the curriculum for the National Science Foundation Innovation Corps – now taught in 53 universities. Together Pete, Joe Felter and I created Hacking for Defense, a nationwide program to teach university students how to use Lean methodologies to solve defense and national security problems.

The solution to continuous disruption requires new ways to think about, organize, and build and deploy national security people, organizations and solutions.

Here are our thoughts about how to confront the Red Queen trap and adapt a government agency to infuse continuous innovation in its culture and practices.

Problem 1: Regardless of a high-level understanding that business as usual can’t go on, all agencies are given guidance and metrics: what they are supposed to do (their “mission”) and how they are supposed to measure success. To no one’s surprise the guidance is “business as usual but more of it.” And to fulfill that guidance agencies create structure (divisions, directorates, etc.) designed to execute repeatable processes and procedures to deliver solutions that meet the requirements of the overall guidance.

Inevitably, while all of our defense and national security agencies will tell you that innovation is one of their pillars, innovation actually is an ill-defined and amorphous aspirational goal, while the people, budget and organization continue to flow to execution of mission (as per guidance.)

There is no guidance or acknowledgement that in our national security agencies, even as we execute our current mission, our capabilities decline every year due to security breaches, technology timing out, tradecraft obsolescence, etc. And there is no explicit requirement for creation of new capabilities that give us the advantage.

Solution 1: Extend agency guidance to include the requirements to create a continuous innovation process that a) resupplies the continual attrition of capabilities and b) creates new capabilities that give us a mission advantage. The result will be agency leadership creating new organizational structures that make innovation a continual process rather than an ad hoc series of heroic efforts.

Problem 2: The word “Innovation” actually describes three very different types of activities.

Solution 2: Use the McKinsey Three Horizons Model to differentiate among the three types. Horizon 1 ideas provide continuous innovation to a company’s existing mission model and core capabilities. Horizon 2 ideas extend a company’s existing mission model and core capabilities to new stakeholders, customers, or targets. Horizon 3 is the creation of new capabilities to take advantage of or respond to disruptive technologies/opportunities or to counter disruption.

We’d add a new category, Horizon 0, which kills ideas that are not viable or feasible (something that Silicon Valley is tremendously efficient at doing).

These Horizons also apply to government agencies and other large organizations. Agencies and commands need to support all three horizons.
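As a compact illustration, the four horizons can be tabulated as follows (the descriptions are summaries of the model as used here, not official McKinsey definitions):

```python
# The three McKinsey horizons plus the proposed Horizon 0, summarized.
HORIZONS = {
    0: "Kill ideas that are not viable or feasible",
    1: "Continuous innovation to the existing mission model and core capabilities",
    2: "Extend the existing mission model and capabilities to new stakeholders or targets",
    3: "Create new capabilities to exploit or counter disruptive technologies",
}
```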

Problem 3: Risk equals failure and failure is to be avoided as it indicates a lack of competence.

Solution 3: The three-horizon model allows everyone to understand that failure in a Horizon 1/existing mission activity is different than failure in a Horizon 3 “never been done before” activity. We want to take risks in Horizon 3. If we aren’t failing with some efforts, we aren’t trying hard enough. An innovation process embraces and understands the different types of failure and risk.

Problem 4: Innovators tend to create activities rather than deployable solutions that can be used on the battlefield or by the mission. Accelerators, hubs, cafes, open-sourcing, crowd-sourcing, maker spaces, Chief Innovation Officers, etc. are all great but they tend to create innovation theater – lots of motion but no action. Great demos are shown and there are lots of coffee cups and posters, but if you look at the deliverables for the mission over a period of years the result is disappointing. Most of the executors and operators have seen little or no value from any of these activities. While the activities individually may produce things of value, they aren’t valued within the communities they serve because they aren’t connected to a complete pipeline that harnesses that value and turns it into a deliverable on the battlefield where it matters.

Solution 4: What we have been missing is an innovation pipeline focused on deployment not demos.

The Lean Innovation process is a self-regulating, evidence-based innovation pipeline. It is a process that operates with speed and urgency, where innovators and stakeholders curate and prioritize their own problems/Challenges/ideas/technology. It is evidence based, data driven, accountable, disciplined, rapid and mission- and deployment-focused.

The process recognizes that innovation isn’t a single activity (an incubator, a class, etc.); it is a process from start to deployment.
The canonical innovation pipeline:

As you see in the diagram, there are 6 steps to the innovation pipeline: sourcing, challenge/curation, prioritization, solution exploration and hypothesis testing, incubation and integration.
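Purely as an illustration, the six steps can be written down as an ordered progression that a candidate project either advances through or drops out of (the step names are taken from the text; the helper function is hypothetical):

```python
# The six innovation pipeline steps, in order.
PIPELINE = [
    "sourcing",
    "challenge/curation",
    "prioritization",
    "solution exploration and hypothesis testing",
    "incubation",
    "integration",
]

def next_step(current):
    """Return the step after `current`, or None once a project is integrated."""
    i = PIPELINE.index(current)
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None
```

A project can be killed at any step (Horizon 0); only projects that survive every step become deployed capabilities.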

Innovation sourcing: a list of problems/challenges, ideas, and technologies that might be worth investing in. These can come from hackathons, research groups, needs from operators in the field, etc.

Challenge/Curation: innovators get out of their own offices and talk to colleagues and customers with the goal of finding other places in the DoD where a problem or challenge might exist in a slightly different form, to identify related internal projects already in existence, and to find commercially available solutions to problems. It also seeks to identify legal issues, security issues, and support issues.

This process also helps identify who the customers for possible solutions would be, who the internal stakeholders would be, and even what initial minimum viable products might look like.

This phase also includes building initial minimum viable products (MVPs). Some ideas drop out when the team recognizes that they may be technically, financially, or legally unfeasible, or they may discover that other groups have already built a similar product.

Prioritization: Once a list of innovation ideas has been refined by curation, it needs to be prioritized using the McKinsey Three Horizons Model: Horizon 1 ideas extend the existing mission, Horizon 2 ideas expand it into adjacent areas, and Horizon 3 ideas create entirely new, potentially disruptive capabilities.

Once projects have been classified, the team prioritizes them, starting by asking: is this project worth pursuing for another few months full time? This prioritization is not done by a committee of executives but by the innovation teams themselves.

Solution exploration and hypothesis testing: The ideas that pass through the prioritization filter enter an incubation process like Hacking for Defense/I-Corps, the system adopted by all U.S. federal research agencies to turn ideas into products.

This six- to ten-week process delivers evidence for defensible, data-based decisions. For each idea, the innovation team fills out a mission model canvas. Everything on that canvas is a hypothesis. This not only includes the obvious – is there solution/mission fit? – but the other “gotchas” that innovators always seem to forget. The framework has the team talking not just to potential customers but also with people responsible for legal, support, contracting, policy, and finance. It also requires that they think through compatibility, scalability and deployment long before this gets presented to engineering. There is now another major milestone for the team: to show compelling evidence that this project deserves to be a new mainstream capability. Alternatively, the team might decide that it should be spun into its own organization or that it should be killed.

Incubation: Once hypothesis testing is complete, many projects will still need a period of incubation as the teams championing the projects gather additional data about the application, further build the minimum viable product (MVP), and get used to working together. Incubation requires dedicated leadership oversight from the horizon 1 organization to ensure the fledgling project does not die of malnutrition (a lack of access to resources) or become an orphan (continuing to work with no parent to guide it).

Integration and refactoring: At this point, if the innovation is Horizon 1 or 2, it’s time to integrate it into the existing organization. (Horizon 3 innovations are more likely set up as their own entities or at least divisions.) Trying to integrate new, unbudgeted, and unscheduled innovation projects into an engineering organization that has line item budgets for people and resources results in chaos and frustration. In addition, innovation projects carry both technical and organizational debt. This creates an impedance mismatch between the organizations that can easily be resolved with a small dedicated refactoring team. Innovation then becomes a continuous cycle rather than a bottleneck.
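As a minimal illustrative sketch (the six stage names come from the pipeline above; the `Idea` data model, the horizon field, and the `decide` callback are hypothetical placeholders, not an official tool), the pipeline can be thought of as a series of evidence-based go/no-go gates that each idea either passes or drops out of:

```python
# Hypothetical sketch of the innovation pipeline as a filtering funnel.
# Stage names are from the post; everything else is an illustrative assumption.
from dataclasses import dataclass, field

STAGES = [
    "sourcing",
    "challenge/curation",
    "prioritization",
    "solution exploration and hypothesis testing",
    "incubation",
    "integration",
]

@dataclass
class Idea:
    name: str
    horizon: int                                   # McKinsey horizon: 1, 2, or 3
    evidence: dict = field(default_factory=dict)   # go/no-go record per stage

def advance(idea: Idea, stage: str, passed: bool) -> bool:
    """Record the evidence-based go/no-go decision for one stage."""
    idea.evidence[stage] = passed
    return passed

def run_pipeline(ideas, decide):
    """Push each idea through the stages in order, dropping it at the
    first failed gate. `decide(idea, stage)` stands in for the team's
    own evidence-gathering and judgment at that gate."""
    deployed = []
    for idea in ideas:
        if all(advance(idea, stage, decide(idea, stage)) for stage in STAGES):
            deployed.append(idea)
    return deployed
```

The point of the sketch is the shape of the process, not the code: every idea carries its evidence with it, each gate is an explicit recorded decision, and “kill” is a normal outcome at any stage rather than a failure of the pipeline.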

Problem 5: The question being asked across the Department of Defense and national security community is, “Can we innovate like startups in Silicon Valley and insert speed, urgency and agility into our work?”

Solution 5: The reality is that the DoD/IC is not Silicon Valley. In fact, it’s much more like a large company with existing customers, existing products and the organizations built to support and service them. And much like large companies they are being disrupted by forces outside their control.

But what’s unique is that, unlike a large company that doesn’t know how to move rapidly, on the battlefields of Iraq and Afghanistan our combatant commands and national security community were more agile, creative and Lean than any startup. They wrote the book on how to collaborate (read Team of Teams) or adopt new technologies (see the Rapid Equipping Force). The problem isn’t that these agencies and commands don’t know how to be innovative. The problem is they don’t know how to be innovative in peacetime, when innovation succumbs to the daily demands of execution. Part of the reason is that large agencies are run by leaders who tend to be excellent Horizon 1 managers of existing people, process and resources but have no experience in building and leading Horizon 3 organizations.

The solution is to understand that an innovation pipeline requires different people, processes, procedures, and metrics than execution does.

Problem 6: How to get started? How to get leadership behind continuous innovation?

Solution 6: To leadership, incubators, cafes, accelerators and hackathons appear to be just background noise unrelated to their guidance and mission. Part of the problem lies with the innovators themselves. Lots of innovation activities celebrate the creation of demos, funding, new makerspaces, etc. but there is little accountability for the actual rapid deployment of useful tools. Once we can convince and demonstrate to leadership that continuous innovation can solve the Red Queen problem, we’ll have their attention and support.

We know how to do this. Our country requires it.
Let’s get started.

Lessons Learned

  • Organizations must constantly adapt and evolve to survive when pitted against ever-evolving opposition in an ever-changing environment
  • Government agencies need to both innovate and execute
  • In peacetime innovation succumbs to the demands of execution
  • We need explicit guidance for innovation to agencies and their leadership, requiring an innovation organization and process that operates in parallel with the execution of the current mission
  • We need an innovation pipeline that delivers rapid results, not separate, disconnected innovation activities

National Security Innovation just got a major boost in Washington

Two good things just happened in Washington – these days that should be enough of a headline.

First, someone ideal was just appointed to be Deputy Assistant Secretary of Defense.

Second, funding to teach our Hacking for Defense class across the country just was added to the National Defense Authorization Act.

Interestingly enough, both events are about how the best and brightest can serve their country – and are testament to the work of two dedicated men.

Soldier, Scholar, Entrepreneur
Joe Felter was just appointed Deputy Assistant Secretary of Defense for South and Southeast Asia. As a result, our country just became a bit safer and smarter. That’s because Joe brings a wealth of real-world experience and leadership to the role.

I got lucky to know and teach with Joe at Stanford. When we met, my first impression was that of a very smart and pragmatic academic. And I also noticed that there was always a cloud of talented grad students who wanted to follow him. (I learned later I was watching one of the qualities of a great leader.) Joe had appointments at Stanford’s Center for International Security and Cooperation (CISAC), where he was the co-director of the Empirical Studies of Conflict Project, and at the Hoover Institution, where he was a research fellow. I learned he’d gone to Harvard to get his MPA at the Kennedy School of Government in conflict resolution. But the thing that really caught my attention: his Stanford Ph.D. thesis in Political Science had the world’s best title: “Taking Guns to a Knife Fight: A Case for Empirical Study of Counterinsurgency.” I wondered how this academic knew anything about counterinsurgency.

This was another reminder that when you reach a certain age, people you encounter may have lived multiple lives, had multiple careers, and had multiple acts. It took me a while to realize that Joe had one heck of a first act before coming to Stanford in 2011.

As I later discovered, Joe’s first act was 24 years in the Army Special Operations Forces (SOF), retiring as a Colonel.
His Special Forces time was with the 1st Special Forces Group as a team leader and later as a company commander. He did a tour with the 75th Ranger Regiment as a platoon leader. In 2005, he returned to West Point (where he earned his undergrad degree) and ran the Combating Terrorism Center. Putting theory into practice, he went to Iraq in 2008 as part of the 75th Ranger Regiment, in support of a Joint Special Operations Task Force. In 2010 Joe was in Afghanistan as the Commander of the Counterinsurgency Advisory and Assistance Team. At various points his Special Forces career took him to countries in Southeast Asia where counterinsurgency was not just academics.

Ironically, I was first introduced to Joe not at Stanford but through one of his other lives – that of an entrepreneur and businessman – at the company he founded, BMNT Partners. It was there that Joe and I along with another retired Army Colonel, Pete Newell, came up with the idea of creating the Hacking for Defense class. We combined the Lean Startup methodology – used by the National Science Foundation to commercialize science  – with the rapid problem sourcing and solution methodology Pete developed on the battlefields in Afghanistan and Iraq when he ran the US Army’s Rapid Equipping Force.

My interest was to get Stanford students engaged in national service and exposed to parts of the U.S. government where their traditional academic path and business career would never take them. (I have a strong belief that we’ve run a 44-year experiment with what happens when you disconnect the majority of Americans from any form of national service. And the result hasn’t been good for our country. Today if college students want to give back to their country, they think of Teach for America, the Peace Corps, AmeriCorps, or perhaps the US Digital Service or the GSA’s 18F. Few consider opportunities to make the world safer with the Department of Defense, State Department, Intelligence Community or other government agencies.)

Joe, Pete and I would end up building a curriculum that would turn into a series of classes — first, Hacking for Defense, then Hacking for Diplomacy (with the State Department and Professor Jeremy Weinstein), Hacking for Energy, Hacking for Impact, etc.

Hacking For Defense
Our first Hacking for Defense class in 2016 blew past our expectations – and we had set a pretty high bar. (See the final class presentations here and here).

Our primary goal was to teach students entrepreneurship while they engaged in national public service.

Our second goal was to introduce our sponsors – the innovators inside the Department of Defense and Intelligence Community –  to a methodology that can help them understand and better respond to rapidly evolving asymmetric threats. We believed if we could get teams to rapidly discover the real problems in the field using Lean methods, and only then articulate the requirements to solve them, then defense acquisition programs could operate at speed and urgency and deliver timely and needed solutions.

Finally, we also wanted to show our sponsors in the Department of Defense that students can make meaningful contributions to understanding problems and rapid prototyping of solutions to real-world national security problems.

The Innovation Insurgency Spreads
Fast forward a year. Hacking for Defense is now offered at eight universities in addition to Stanford – Georgetown, University of Pittsburgh, Boise State, UC San Diego, James Madison University, University of Southern Mississippi, and later this year University of Southern California and Columbia University. We established Hacking for Defense.org, a non-profit to train educators and provide a single point of contact for connecting the DOD/IC sponsor problems to these universities.

By the middle of this year Hacking For Defense started to feel like it had the same momentum as when my Lean LaunchPad class at Stanford got adopted by the National Science Foundation and became the Innovation Corps (I-Corps). I-Corps uses Lean Startup methods to teach scientists how to turn their discoveries into entrepreneurial, job-producing businesses. Over 1,000 teams of our nation’s best scientists have been through the program. It has changed how federally funded research is commercialized.

Recognizing that it’s a model for a government program that’s gotten the balance between public/private partnerships just right, last fall Congress passed the American Innovation and Competitiveness Act, making the National Science Foundation Innovation Corps a permanent part of the nation’s science ecosystem.

It dawned on Pete, Joe and me that perhaps we could get Congress to fund the national expansion of Hacking for Defense the same way. But serendipitously, the best person we were going to ask for help had already been thinking about this.

The Congressman From Science and Innovation
Before everyone else thought that teaching scientists how to build companies using Lean Methods might be good for the country, there was one congressman who got it first.

In 2012, Rep. Dan Lipinski (D-IL), ranking member on the House Research and Technology Subcommittee, got on an airplane and flew to Stanford to see first-hand the class that would become I-Corps. For the first few years Lipinski was a lonely voice in Congress saying that we’ve found a better way to train our scientists to create companies and jobs. But over time, his colleagues became convinced that it was a non-partisan good idea. Rep. Lipinski was responsible for helping I-Corps proliferate through the federal government.

While Joe Felter and Pete Newell were thinking about approaching Congressman Lipinski about funding for Hacking for Defense, Lipinski had already been planning to do so. As he recalled, “I was listening to your podcast as I was working in my backyard cutting, digging, chopping, etc. (yes, I do really work in my backyard), when it dawned on me that funding Hacking for Defense as a national program – just like I did for the Innovation Corps – would be great for our nation’s defense when we are facing new unique threats. I tasked my staff to draft an amendment to the National Defense Authorization Act and I sponsored the amendment.”

(The successful outcome of I-Corps has given the Congressman credibility on entrepreneurship education among his peers. And it doesn’t hurt that he has a Ph.D and was a university professor before he ended up in Congress.)

Joe Felter and Pete Newell mobilized a network of Hacking for Defense supporters. Joe and Pete’s reputations preceded them on Capitol Hill, but it’s also a testament to the strength of Hacking for Defense that there’s now a large network of people who have experienced and believe in the program and were willing to help out – writing letters of support, reaching out to other members of Congress to ask for support, and providing Congressman Lipinski’s office with information and background.

Congressman Lipinski led the amendment. He brought on co-sponsors from both sides of the aisle: Representatives Steve Knight (R-CA 25), Ro Khanna (D-CA 17), Anna Eshoo (D-CA 18), Seth Moulton (D-MA 6) and Carol Shea-Porter (D-NH 1).

On the floor of the House, Lipinski said, “Rapid, low-cost technological innovation is what makes Silicon Valley revolutionary, but the DOD hasn’t historically had the mechanisms in place to harness this American advantage. Hacking for Defense creates ways for talented scientists and engineers to work alongside veterans, military leaders, and business mentors to innovate solutions that make America safer.”

Last Friday the House unanimously approved an amendment to the National Defense Authorization Act authorizing the Hacking for Defense (H4D) program and enabling the Secretary of Defense to expend up to $15 million to support development of curriculum, best practices, and recruitment materials for the program.

This week the H4D amendment moves on to the Senate and Joe Felter moves on to the Pentagon. Both of those events have the potential to make our world a much safer place – today and tomorrow.

Innovation, Change and the Rest of Your Life

I gave the Alumni Day talk at U.C. Santa Cruz and had a few things to say about innovation.

—-

Even though I live just up the coast, I’ve never had the opportunity to start a talk by saying “Go Banana Slugs.”

I’m honored for the opportunity to speak here today.

We’re standing 15 air miles away from the epicenter of technology innovation. The home of some of the most valuable and fastest growing companies in the world.

I’ve spent my life in innovation, eight startups in 21 years, and the last 15 years in academia teaching it.

I lived through the time when, working at my first job in Ann Arbor, Michigan, we had to get out a map to discover that San Jose was not only in Puerto Rico – there was a city with that same name in California. And that was where my plane ticket was taking me, to install some computer equipment.

39 years ago I got on that plane and never went back.

I’ve seen the Valley grow from Sunnyvale to Santa Clara to today where it stretches from San Jose to South of Market in San Francisco.  I’ve watched the Valley go from Microwave Valley – to Defense Valley – to Silicon Valley to Internet Valley. And to today, when its major product is simply innovation.  And I’ve been lucky enough to watch innovation happen not only in hardware and software but in Life Sciences – in Therapeutics, Medical Devices, Diagnostics and now Digital Health.

I’ve been asked to talk today about the future of Innovation – typically that involves giving you a list of hot technologies to pay attention to – technologies like machine learning. The applications that will pour out of just this one technology will transform every industry – from autonomous vehicles to automated radiology/oncology diagnostics.

Equally transformative on the life science side, CRISPR/Cas gene editing enables rapid editing of the genome, and that will change life sciences as radically as machine intelligence.

But today’s talk about the future of innovation is not about these technologies, or the applications or the new industries they will spawn.

In fact, it’s not about any specific new technologies.

The future of innovation is really about seven changes that have made innovation itself possible in a way that never existed before.

We’ve created a world where innovation is not just each hot new technology, but a perpetual motion machine.

So how did this happen?  Where is it going?

Silicon Valley emerged by the serendipitous intersection of:

  • Cold War research in microwaves and electronics at Stanford University,
  • a Stanford Dean of Engineering who encouraged startup culture over pure academic research,
  • Cold War military and intelligence funding driving microwave and military products for the defense industry in the 1950’s,
  • a single Bell Labs researcher deciding to start his semiconductor company next to Stanford in the 1950’s, which led to
  • the wave of semiconductor startups in the 1960’s/70’s,
  • the emergence of Venture Capital as a professional industry,
  • the personal computer revolution in 1980’s,
  • the rise of the Internet in the 1990’s and finally
  • the wave of internet commerce applications in the first decade of the 21st century.
  • The flood of risk capital into startups at a size and scale that was not only unimaginable at its start, but in the middle of the 20th century would have seemed laughable.

Up until the beginning of this century, the pattern for the Valley seemed to be clear. Each new wave of innovation – microwaves, defense, silicon, disk drives, PCs, Internet, therapeutics, – was like punctuated equilibrium – just when you thought the wave had run its course into stasis, there emerged a sudden shift and radical change into a new family of technology. 

But in the 20th Century there were barriers to Entrepreneurship
In the last century, while startups continued to innovate in each new wave of technology, the rate of innovation was constrained by limitations we only now can understand. Startups in the past were constrained by:

  1. customers were initially the government and large companies and they adopted technology slowly,
  2. long technology development cycles (how long it takes to get from idea to product),
  3. disposable founders,
  4. the high cost of getting to first customers (how many dollars to build the product),
  5. the structure of the Venture Capital industry (there were a limited number of VC firms each needing to invest millions per startups),
  6. the failure rate of new ventures (startups had no formal rules and acted like smaller versions of large companies),
  7. the scarcity of information and expertise about how to build startups (it was clustered in specific regions like Silicon Valley, Boston, New York, etc.), and there were no books, blogs or YouTube videos about entrepreneurship.

What we’re now seeing is The Democratization of Entrepreneurship
What’s happening today is something more profound than a change in technology. What’s happening is that these seven limits to startups and innovation have been removed.

The first thing that’s changed is that Consumer Internet and Genomics are Driving Innovation at scale
In the 1950’s and ‘60’s U.S. Defense and Intelligence organizations drove the pace of innovation in Silicon Valley by providing research and development dollars to universities, and defense companies built weapons systems that used the Valley’s first microwave devices and semiconductor components.

In the 1970’s, 80’s and 90’s, momentum shifted to the enterprise as large businesses supported innovation in PCs, communications hardware and enterprise software. Government and the enterprise are now followers rather than leaders.

Today, for hardware and software it’s consumers – specifically consumer Internet companies – that are the drivers of innovation. When the product and channel are bits, adoption by tens and hundreds of millions and even billions of users can happen in years versus decades.

For life sciences it was the Genentech IPO in 1980 that proved to investors that life science startups could make them a ton of money.

The second thing that’s changed is that we’re now Compressing the Product Development Cycle
In the 20th century startups I was part of, the time to build a first product release was measured in years as we turned out the founder’s vision of what customers wanted. This meant building every possible feature the founding team envisioned into a monolithic “release” of the product.

Yet time after time, after the product shipped, startups would find that customers didn’t use or want most of the features. The founders were simply wrong about their assumptions about customer needs. It turns out the term “visionary founder” was usually a synonym for someone who was hallucinating. The effort that went into making all those unused features was wasted.

Today startups build products differently. Instead of building the maximum number of features, founders treat their vision as a series of untested hypotheses, then get out of the building and test a minimum feature set in the shortest period of time.  This lets them deliver a series of minimal viable products to customers in a fraction of the time.

For products that are simply “bits” delivered over the web, a first product can be shipped in weeks rather than years.

The third thing is that Founders Need to Run the Company Longer
Today, we take for granted new mobile apps and consumer devices appearing seemingly overnight, reaching tens of millions of users – and just as quickly falling out of favor. But in the 20th century, dominated by hardware, software, and life sciences, technology swings inside an existing market happened slowly — taking years, not months. And while new markets were created (i.e. the desktop PC market), they were relatively infrequent.

This meant that disposing of the founder, and the startup culture responsible for the initial innovation, didn’t hurt a company’s short-term or even mid-term prospects.  So, almost like clockwork 20th century startups fired the innovators/founders when they scaled. A company could go public on its initial wave of innovation, then coast on its current technology for years. In this business environment, hiring a new CEO who had experience growing a company around a single technical innovation was a rational decision for venture investors.

That’s no longer the case.

The pace of technology change in the second decade of the 21st century is relentless. It’s hard to think of a hardware/software or life science technology that dominates its space for years. That means new companies face continuous disruption before their investors can cash out.

To stay in business in the 21st century, startups must do three things their 20th century counterparts didn’t:

  • A company is no longer built on a single innovation. It needs to be continuously innovating – and who best to do that? The founders.
  • To continually innovate, companies need to operate at startup speed and cycle time much longer than their 20th century counterparts did. This requires retaining a startup culture for years – and who best to do that? The founders.
  • Continuous innovation requires the imagination and courage to challenge the initial hypotheses of your current business model (channel, cost, customers, products, supply chain, etc.) This might mean competing with and if necessary killing your own products. (Think of the relentless cycle of iPod then iPhone innovation.) Professional CEOs who excel at growing existing businesses find this extremely hard.  Who best to do that? The founders.

The fourth thing that’s changed is that you can start a company on your laptop For Thousands Rather than Millions of Dollars
Startups traditionally required millions of dollars of funding just to get their first product to customers. A company developing software would have to buy computers and license software from other companies and hire the staff to run and maintain it. A hardware startup had to spend money building prototypes and equipping a factory to manufacture the product.

Today open source software has slashed the cost of software development from millions of dollars to thousands. My students think of computing power as a utility like I think of electricity. They can get to more computing power via their laptop through Amazon Web Services than existed in the entire world when I started in Silicon Valley.

And for consumer hardware, no startup has to build their own factory as the costs are absorbed by offshore manufacturers.  China has simply become the factory.

The cost of getting the first product out the door for an Internet commerce startup has dropped by a factor of 100 or more in the last decade. Ironically, while the cost of getting the first product out the door has plummeted, it can now take tens or hundreds of millions of dollars to scale.

The fifth change is the New Structure of how startups get funded
The plummeting cost of getting a first product to market (particularly for Internet startups) has shaken up the Venture Capital industry.

Venture Capital used to be a tight club clustered around formal firms located in Silicon Valley, Boston, and New York. While those firms are still there (and getting larger), the pool of money that invests risk capital in startups has expanded, and a new class of investors has emerged.

First, Venture Capital and angel investing is no longer a U.S. or Euro-centric phenomenon. Risk capital has emerged in China, India and other countries where risk taking, innovation and liquidity are encouraged, on a scale previously only seen in the U.S.

Second, new groups of VCs – super angels, smaller than the traditional multi-hundred-million-dollar VC fund – can make the small investments necessary to get a consumer Internet startup launched. These angels make lots of early bets and double down when early results appear. (And the results do appear years earlier than in a traditional startup.)

Third, venture capital has now become Founder-friendly.

A 20th century VC was likely to have an MBA or finance background. A few, like John Doerr at Kleiner Perkins and Don Valentine at Sequoia, had operating experience in a large tech company. But out of the dot-com rubble at the turn of the 21st century, new VCs entered the game – this time with startup experience. The watershed moment was in 2009 when the co-founder of Netscape, Marc Andreessen, formed a venture firm and started to invest in founders with the goal to teach them how to be CEOs for the long term. Andreessen realized that the game had changed. Continuous innovation was here to stay and only founders – not hired execs – could play and win.  Founder-friendly became a competitive advantage for his firm Andreessen Horowitz. In a seller’s market, other VCs adopted this “invest in the founder” strategy.

Fourth, in the last decade, corporate investors and hedge funds have jumped into later stage investing with a passion. Their need to get into high-profile deals has driven late-stage valuations into unicorn territory.  A unicorn is a startup with a market capitalization north of a billion dollars.

What this means is that the emergence of incubators and super angels has dramatically expanded the sources of seed capital. VCs have now ceded more control to founders. Corporate investors and hedge funds have dramatically expanded the amount of money available. And the globalization of entrepreneurship means the worldwide pool of potential startups has increased at least 100-fold since the turn of this century. Today there are over 200 startups worth over a billion dollars each.

Change Number 6 is that Starting a Company means you no longer Act Like A Big Company
Since the turn of the century, there’s been a radical shift in how startups think of themselves. Until then, investors and entrepreneurs acted as if startups were simply smaller versions of large companies. Everything a large company did, a startup should do – write a business plan; hire sales, marketing, engineering; spec all the product features on day one and build everything for a big first customer ship.

We now understand that’s wrong.  Not kind of wrong but going out of business wrong.

What used to happen is you’d build the product, have a great launch event, everyone would high-five the VP of Marketing for the great press, and then at the first board meeting ask the VP of Sales how he was doing versus the sales plan. The response was inevitably “great pipeline.” (Great pipeline means no real sales.)

This would continue for months, as customers weren’t behaving as per the business plan. Meanwhile every other department in the company would be making its plan – meaning the company was burning cash without bringing in revenue. Finally the board would fire the VP of Sales. The cycle would then continue: next you’d fire the VP of Marketing, then the CEO.

What we’ve learned is that while companies execute business models, startups search for a business model. It means that, unlike big companies, startups are guessing about who their customers are, what features they want, where and how they want to buy the product, and how much they want to pay. We now understand that startups are just temporary organizations designed to search for a scalable and repeatable business model.

We now have specific management tools to grow startups. Entrepreneurs first map their assumptions and then test these hypotheses with customers out in the field (customer development) and use an iterative and incremental development methodology (agile development) to build the product. When founders discover their assumptions are wrong, as they inevitably will, the result isn’t a crisis, it’s a learning event called a pivot — and an opportunity to change the business model.

The result: startups now have tools that speed up the search for customers, reduce time to market and slash the cost of development. I’m glad to have been part of the team inventing the Lean Startup methodology.

Change number 7 – the last one – is perhaps the most profound, and one students graduating today don’t even recognize. And it’s that Information is everywhere

In the 20th century, learning the best practices of a startup CEO was limited by your coffee bandwidth. That is, you learned best practices from your board and by having coffee with other, more experienced CEOs. Today, every founder can read all there is to know about running a startup online. Incubators and accelerators like Y Combinator have institutionalized experiential training in best practices (product/market fit, pivots, agile development, etc.); provide experienced and hands-on mentorship; and offer a growing network of founding CEOs.

The result is that today’s CEOs have exponentially more information than their predecessors. This is ironically part of the problem. Reading about, hearing about and learning about how to build a successful company is not the same as having done it. As we’ll see, information does not mean experience, maturity or wisdom. 

The Entrepreneurial Singularity
The barriers to entrepreneurship are not just being removed. In each case, they’re being replaced by innovations that are speeding up each step, some by a factor of ten.

And while innovation is moving at Internet speed, it’s not limited to just Internet commerce startups. It has spread to the enterprise and ultimately every other business segment. We’re seeing the effect of Amazon on retailers. Malls are shutting down. Most students graduating today have no idea what a Blockbuster video store was. Many have never gotten their news from a physical newspaper.

If we are at the cusp of a revolution as important as the scientific and industrial revolutions what does it mean? Revolutions are not obvious when they happen. When James Watt started the industrial revolution with the steam engine in 1775 no one said, “This is the day everything changes.”  When Karl Benz drove around Mannheim in 1885, no one said, “There will be 500 million of these driving around in a century.” And certainly in 1958 when Noyce and Kilby invented the integrated circuit, the idea of a quintillion (10 to the 18th) transistors being produced each year seemed ludicrous.

It’s possible that we’ll look back to this decade as the beginning of our own revolution. We may remember this as the time when scientific discoveries and technological breakthroughs were integrated into the fabric of society faster than they had ever been before. When the speed of how businesses operated changed forever.

As the time when we reinvented the American economy and our Gross Domestic Product began to take off and the U.S. and the world reached a level of wealth never seen before.  It may be the dawn of a new era for a new American economy built on entrepreneurship and innovation.

Innovation – something both parties can agree on

On the last day Congress was in session in 2016, Democrats and Republicans agreed on a bill that increased innovation and research for the country.

For me, seeing Congress pass this bill, the American Innovation and Competitiveness Act, was personally satisfying. It made the program I helped start, the National Science Foundation Innovation Corps (I-Corps), a permanent part of the nation’s science ecosystem. I-Corps uses Lean Startup methods to teach scientists how to turn their discoveries into entrepreneurial, job-producing businesses. I-Corps bridges the gap between public support of basic science and private capital funding of new commercial ventures. It’s a model for a government program that’s gotten the balance between public/private partnerships just right. Over 1,000 teams of our nation’s best scientists have been through the program.

The bill directs the expansion of I-Corps to additional federal agencies and academic institutions, as well as through state and local governments.  The new I-Corps authority also supports prototype or proof-of-concept development activities, which will better enable researchers to commercialize their innovations. The bill also explicitly says that turning federal research into companies is a national goal to promote economic growth and benefit society. For the first time, Congress has recognized the importance of government-funded entrepreneurship and commercialization education, training, and mentoring programs specifically saying that this will improve the nation’s competitiveness. And finally this bill acknowledges that networks of entrepreneurs and mentors are critical in getting technologies translated from the lab to the marketplace.

This bipartisan legislation was crafted by Senators Cory Gardner (R-CO) and Gary Peters (D-MI). Senator John Thune (R-SD) chairs the Senate commerce and science committee that crafted S. 3084. After years of contention over reauthorizing the National Science Foundation, House Science Committee Chairman Lamar Smith and Ranking Member Eddie Bernice Johnson worked to negotiate the agreement that enabled both the House and the Senate to pass this bill.

While I was developing the class at Stanford, it was my counterparts at the NSF who had the vision to make the class a national program. Thanks to Errol Arkilic, Don Millard, Babu Dasgupta and Anita LaSalle (as well as current program leaders Lydia McClure and Steven Konsek) and the over 100 instructors at the 53 universities who teach the program across the U.S.

But I haven’t forgotten that before everyone else thought that teaching scientists how to build companies using Lean Methods might be good for the country, there was one congressman who got it first. In 2012, Representative Dan Lipinski (D-IL), co-chair of the House STEM Education Caucus, got on an airplane and flew to Stanford to see the class first-hand.

For the first few years Lipinski was a lonely voice in Congress saying that we’ve found a better way to train our scientists to create companies and jobs.

This bill is a reauthorization of the 2010 America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science (COMPETES) Act, which set out policies that govern the NSF, the National Institute of Standards and Technology (NIST), and federal programs on innovation, manufacturing, and science and math education. Reauthorization bills don’t fund an agency, but they provide policy guidance. The bill also resolved partisan differences over how NSF should conduct peer review and manage research.

I-Corps is the accelerator that helps scientists bridge the commercialization gap between research in their labs and wide-scale commercial adoption and use.

Why This Matters
While a few of the I-Corps teams are in web/mobile/cloud, most are working on advanced technology projects that don’t make TechCrunch. You’re more likely to see their papers (in material science, robotics, diagnostics, medical devices, computer hardware, etc.) in Science or Nature.

I-Corps uses everything we know about building Lean Startups and Evidence-based Entrepreneurship to connect innovation to entrepreneurship. Its curriculum is built on a framework of business model design, customer development and agile engineering – and its emphasis on evidence and Lessons Learned versus demos makes it the world’s most advanced accelerator. Its success is measured not only by the technologies that leave the labs, but also by how many U.S. scientists and engineers we train as entrepreneurs and how many of them pass on their knowledge to students. I-Corps is our secret weapon for integrating American innovation and entrepreneurship into every U.S. university lab.

Every time I go to Washington and spend time at the National Science Foundation or the National Institutes of Health I’m reminded why the U.S. leads the world in support of basic and applied science. It’s not just the money we pour into these programs (~$125 billion/year), but the people who have dedicated themselves to making the world a better place by advancing science and technology for the common good.

Congratulations to everyone involved in making the Innovation Corps a national standard.

So Here’s What I’ve Been Thinking…

I was interviewed at the Stanford Business School, and listening to the podcast, I realized I repeated some of my usual soundbites – but embedded in the conversation were a few things I’ve never shared before about service.

Listen here:

Steve Blank on Silicon Valley, AI and the Future of Innovation

Download the .mp3 here:

Download Episode

The Innovation Insurgency Scales – Hacking For Defense (H4D)

Hacking for Defense is a battle-tested problem-solving methodology that runs at Silicon Valley speed. We just held our first Hacking for Defense Educators Class with 75 attendees.

The results: 13 universities will offer the course in the next year, government sponsors committed to keep sending hard problems to the course, the Department of Defense is expanding its use of H4D to include a classified version, and corporate partners are expanding their efforts to support the course and to create their own internal H4D courses.

It was a good three days.

————-

Another Tool for Defense Innovation
Last week we held our first 3-day Hacking for Defense Educator and Sponsor Class. Our goal in this class was to:

  1. Train other educators on how to teach the class at their schools.
  2. Teach Department of Defense/Intelligence Community sponsors how to deliver problems to these schools and how to get the most out of student teams.
  3. Create a national network of colleges and universities that use the Hacking for Defense Course to provide hundreds of solutions to critical national security problems every year.

What our sponsors have recognized is that Hacking for Defense is a new tool in the country’s Defense Innovation toolkit. In 1957, after the Soviet Union launched the Sputnik satellite, the U.S. felt that it had been the victim of a strategic technological surprise. DARPA was founded in 1958 to ensure that from then on the United States would be the initiator of technological surprises. It does so by funding research that promises the Department of Defense transformational change instead of incremental advances.

By the end of the 20th century the Central Intelligence Agency (CIA) realized that it was no longer the technology leader it had been when it developed the U-2, SR-71 and CORONA reconnaissance programs in the 1950s and 1960s. Its systems were struggling to manage the rapidly increasing torrent of information being collected. The agency realized that commercial applications of technology were often more advanced than those used internally. The CIA set up In-Q-Tel to be the venture capital arm of the intelligence community to speed the insertion of technologies. In-Q-Tel invests in startups developing technologies that provide ready-soon innovation (within 36 months) vital to the IC mission. More than 70 percent of the In-Q-Tel portfolio companies have never before done business with the government.

In the 21st century the DOD/IC community has realized that adversaries are moving at a speed our traditional acquisition systems cannot keep up with. Hacking for Defense combines the rapid problem sourcing and curation methodology developed on the battlefields of Afghanistan and Iraq by Colonel Pete Newell and the US Army’s Rapid Equipping Force with the Lean Startup practices that I pioneered in Silicon Valley and which are now the mainstay of the National Science Foundation’s I-Corps program. Hacking for Defense is a problem-solving methodology that offers the DOD/IC community a collaborative approach to innovation that provides ready-now innovation (within 12-36 months).

Train the Trainers
Pete Newell, Joe Felter and I learned a lot developing the Hacking for Defense class, more as we taught it, and even more as we worked with the problem sponsors in the DOD/Intel community. Since one of our goals is to make this class available nationally, it was time to pass on what we had learned and to train other educators how to teach the class and sponsors how to craft problems that student teams could work on.

(If you want a great overview of the Hacking for Defense class, stop and read this article from War on the Rocks. Seriously.)

When we developed our Hacking for Defense class, we created a ton of course materials (syllabus, slides, videos). In addition, for the Educator Class we captured all we knew about setting up and teaching the class and wrote a 290-page educator’s guide with suggested best practices, sample lesson plans, and detailed lecture scripts and slides for each class session. We developed a separate sponsor guide with ideas about how to get the most out of the student teams and the university.

The Educator Class: What We Learned
One of the surprises for me was seeing the value of having the Department of Defense and other government agency sponsors working together with the university educators. (One bit of learning was that the sponsors’ portion of the workshop could have been a day shorter.)

Two other things we learned have us modifying the pedagogy of the class.

First, our mantra to the students has been to learn about “Deployment not Demos.” That meant we were asking the students to understand all parts of the mission model canvas, not just the beneficiaries and the value proposition. We wanted them to learn what it takes to get their product/service deployed to the field, not just have another demo for a general. This meant that the minimum viable products the students built were focused on maximizing their learning of what to build, not just building prototypes. While that worked great for the students, we learned from our sponsors that for some of them getting to deployment actually required demos as part of the means to reach this end. They wanted the students to start delivering MVPs early and often and use the sponsor feedback to accelerate their learning.

This conversation made us realize that we had skewed the class to maximize student learning without really appreciating what specific deliverables would make the sponsors feel that the time they’ve invested in the class was worthwhile. So for our next round of classes we will:

  • require sponsors to specifically define what success from their student team would look like
  • have students in the first week of class present what sponsors say success looks like
  • still encourage MVPs that maximize student learning, but also recognize that for some sponsors, learning could be accelerated with earlier functional MVPs

Our second insight that has changed the pedagogy also came from our sponsors. As most of our students have no military experience, we teach a 3-hour introduction to the DOD and Intel Community workshop. While that provides a 30,000-foot overview, it doesn’t describe any detail about the teams’ specific sponsoring organization (NSA, ARCYBER, 7th Fleet, etc.). (By the end of the quarter every team figures out how their sponsor ecosystem works.) The sponsors suggested that they offer a workshop early in the class and brief their student team on their organization, budget, issues, etc. We thought this was a great idea, as this will greatly accelerate how teams target their customer discovery. When we update the sponsor guide, we will suggest this to all sponsors.

Another surprise was how applicable the “Hacking for…” methodology is for other problems. Working with the State Department we are offering a Hacking for Diplomacy class at Stanford starting later this month. And we now have lots of interest from organizations that have realized that this problem-solving methodology is equally applicable to solving public safety, policy, community and social issues internationally and within our own communities. We’ll soon launch a series of new modules to address these deserving communities.

Lessons Learned

  • Hacking for Defense = problem-solving methodology for innovation insurgents inside the DOD/Intel Community
  • The program will scale to 13+ universities in 2017
  • There is demand to apply the problem-solving methodology to a range of public sector organizations where success is measured by impact and mission achievement versus revenue and profit.

The National Geospatial Intelligence Agency Goes Lean

We tend to associate the government with words like bureaucracy rather than lean innovation. But smart people within government agencies are working to change the culture and embrace new ways of doing things. The National Geospatial Intelligence Agency (NGA) is a great example.

The NGA, an organization within the U.S. Department of Defense, delivers geospatial intelligence (satellite imagery, video, and other sensor data) to policymakers, warfighters, intelligence professionals and first responders.

A team from their Enterprise Innovation Office has joined us at NYU as observers at our 5-day Lean LaunchPad class, while another team is in Silicon Valley with the Hacking for Defense team learning how to turn their hard problems into partnerships with commercial companies that lead to deployed solutions.


The Innovation Insurgency
Over the last year the National Geospatial Intelligence Agency (NGA) has become part of the “Innovation Insurgency” inside the U.S. Department of Defense by adopting Lean Methodology inside their agency.

In July the NGA hosted the inaugural 2016 Intelligence Community Innovation Conference with attendees from across the Department of Defense and public sector. At the conference Vice Chairman of the Joint Chiefs of Staff Air Force Gen. Paul Selva said, “Implementing innovation [in the government and large organizations] is like turning a battleship: you may have an upset crew, with cooks having to clean up spilled food and sailors falling out of beds, but that ship can turn with effort. The end result is often that change can happen, but it is going to come at the cost of disruption and difficulty.”

The good news for the country is that the leadership of the National Geospatial Intelligence Agency has decided to turn the ship now.

To connect to innovation centers outside the agency, their research group has set up “NGA Outpost Valley” (NOV), an innovation outpost in Silicon Valley. The NOV is building an ecosystem of innovative companies around NGA’s hard problems to rapidly deploy solutions to solve them.

To promote innovation inside the NGA, they’ve staffed an Enterprise Innovation Office (EIO) to coach, educate and advise the entire agency, from core leadership to the operational edges, with methods and concepts of validated learning through rapid experimentation and customer development.

The NGA has adopted Lean Innovation methods to make this happen. The process starts by collecting agency-wide ideas and customer problems, gathering group insights, and sorting out which problems are important enough to pursue. The innovation process uses the Value Proposition Canvas, customer development and the Mission Model Canvas to validate hypotheses and deliver minimum viable products. This process allows the agency to deliver projects rapidly.

NGA Lean Innovation

To help start this innovation program, the NGA’s Enterprise Innovation Office has had its innovation teams go through the already established Innovation-Corps classes at the National Security Agency (NSA), and it’s about to stand up its own Innovation-Corps curriculum inside the NGA. (The Innovation-Corps (I-Corps for short) program is the Lean Innovation class I developed at Stanford and teach there and at Berkeley, Columbia and NYU. It was first adopted by the National Science Foundation, is now offered at 54 universities, and starting last year is taught in all research agencies and the DOD.)

This past week a team from the NGA’s Enterprise Innovation Office observed the 5-day Lean LaunchPad class I’m teaching at NYU. Their goal is to integrate these techniques into their own Lean innovation processes. From their comments and critiques of the students, they’re more than ready to teach it themselves.

At the same time the NGA Outpost Valley team was in Silicon Valley going through a Hacking for Defense workshop (which we call a “sprint”). Their goal was to translate one of their problems into a language that commercial companies in the valley could understand and solve, then to figure out how to get the product built and deployed. Like other parts of the Department of Defense (the Joint Improvised-Threat Defeat Agency (JIDA) and the Defense Innovation Unit Experimental (DIUx)), NGA’s Outpost Valley team is using a Hacking for Defense sprint to build a scalable process for recruiting industry and other partners to get solutions to real problems deployed at speed.

Putting lean principles into NGA’s acquisition practices
As part of the Department of Defense, the NGA acquires technology and information systems through the traditional DOD acquisition system – which has been described as the antithesis of rapid customer discovery and agile practices. The current acquisition system seldom validates whether a promised capability actually works until after the government is locked into a multiyear contract, and fixing those problems later often means cost overruns, late delivery and underperformance. And as any startup will tell you, the traditional government acquisition processes create disincentives for startups to participate in the DOD market. Few startups know where and how to find opportunities to sell to the DOD, they seldom have the resources or expertise to navigate DOD bureaucratic procurement requirements, and the 12-plus months it takes the government to enter into a contract makes it cost-prohibitive for startups.

A year ago Sue Gordon, the deputy director of the NGA, sent out an agency-wide memo that said in part, “…we must build speed and flexibility (agility) into our acquisition processes to respond to those evolutions. It is our job to acquire the technologies, data and services that NGA and the NSG need to execute our mission in the most effective, efficient and timely manner possible …”

In addition to NGA’s internal Lean Innovation process and innovation outpost in Silicon Valley, they are starting to use open innovation and crowdsourcing to attract commercial developers to tackle geospatial intelligence problems.

This week the NGA posted its first major open challenge – the NGA Disparate Data Challenge – on Challenge.gov, the U.S. government’s open innovation and crowdsourcing platform. Government agencies like the NGA can use the site to post challenges and award prizes to citizens who find the best solutions. Putting a challenge on a crowdsourcing platform is a groundbreaking activity for the agency and opens the possibility for a number of benefits:

  • Presenting a problem instead of a set of requirements to startups leaves the window open to uncover unknown solutions and insights
  • Setting up the challenge in two stages hopefully gets startups to participate while learning about the NGA and its technical needs
  • Asking for working solutions offers the potential for minimal viable acquisition to quickly validate who can solve the problem prior to committing large sums of taxpayer funds
  • Finding solutions at speed by shrinking the timeline for determining the viability of a solution, without the need to execute any large-scale contract.

The NGA Disparate Data Challenge has two stages.

  • Stage 1: teams have to demonstrate access to and retrieval of NGA-provided datasets for analysis. (This data is a proxy for the difficulties associated with accessing and using NGA’s real classified data.) Up to 15 teams that can do this will win $10,000, and the winners go on to Stage 2.
  • Stage 2: the teams demo their solutions, plus any other features they’ve added, against a new dataset live to an NGA panel of judges in a hackathon-style competition. First place takes an additional $25,000; second, $15,000; and third, $10,000, with an opportunity to be part of a competitive pool for a future pilot contract with NGA.

NGA’s challenge is its first attempt to attract startups that otherwise would not do business with the agency. It’s likely that the prize amounts ($10K-$25K) are too low by at least an order of magnitude to get a startup to take its eye off the commercial market. Curating a crowd and persuading them to work together because the work meets their value proposition is hard work that takes incubation, not just prizes. However, this is a learning opportunity and a great beginning for the Department of Defense.

Challenges in Embracing Innovation in Government Agencies
Innovation in large organizations is fraught with challenges, including: building an innovation pipeline without screwing up current product development; educating senior leadership and (at times intransigent) middle management about the difference between innovation and execution; encouraging hands-on customer development; establishing links between departments and functional silos that don’t talk to each other (and are often competing for resources); turning innovative prototypes and minimum viable products into products deliverable to customers; etc.

Government agencies have all these challenges and more. They have more stringent policies and procedures, federally regulated oversight and compliance rules, and line-item budgets for access to funding. In secure locations, IT security can hinder the simplest process, while lack of access to a physical collaboration space and to data sets up additional barriers to innovation.

The NGA has embraced promising moves to bring lean methods to the way it innovates internally and acquires technology. But what we’ve seen in other agencies in the Department of Defense is that unless the innovation process is run, coached and scaled by innovators who have been in the DOD and understand these rules (and have the clearances), using off-the-shelf commercial lean innovation techniques in government agencies is likely to create demos for senior management but few fully deployed products. (The National Security Agency has pioneered getting this process right with I-Corps@NSA.)

Lessons Learned

  • Lean Innovation teams are starting up at the National Geospatial Intelligence Agency (NGA)
    • NGA has an Innovation Outpost in Silicon Valley working on its first Hacking for Defense sprint
    • NGA is experimenting with open innovation with its first problem on Challenge.gov
  • The goal of Lean in government agencies should mean deployment not demos
    • Successfully delivering products with speed and urgency requires coaches and instructors who have been the customer: warfighters, analysts, operators, etc.
    • It will take innovation built from the inside as well as acquisition from the outside to make it happen

Why the Navy Needs Disruption Now (part 2 of 2)

The future is here it’s just distributed unevenly – Silicon Valley view of tech adoption

The threat is here it’s just distributed unevenly – A2/AD and the aircraft carrier

This is the second of a two-part post following my stay on the aircraft carrier USS Carl Vinson. Part 1 talked about what I saw and learned – the layout of a carrier, how the air crew operates and how the carrier functions in the context of the other ships around it (the strike group). But the biggest learning was the realization that disruption is not just happening to companies; it’s also happening to the Navy. And the Lean Innovation tools we’ve built to deal with disruption and create continuous innovation for large commercial organizations are equally relevant here.

This post offers a few days’ worth of thinking about what I saw. (If you haven’t, read part 1 first.)


The threat is here; it’s just distributed unevenly – A2/AD and the aircraft carrier
Both of the following statements are true:

  • The aircraft carrier is viable for another 30 years.
  • The aircraft carrier is obsolete.

Well-defended targets
Think of an aircraft carrier as an $11 billion portable air force base manned by 5,000 people, delivering 44 F/A-18 strike fighters anywhere in the world.

The primary role of the 44 F/A-18 strike fighters that form the core of the carrier’s air wing is to control the air and drop bombs on enemy targets. For targets over uncontested airspace (Iraq, Afghanistan, Syria, Somalia, Yemen, Libya, etc.) that’s pretty easy. The problem is that First World countries have developed formidable surface-to-air missiles – the Russian S-300 and S-400 and the Chinese HQ-9 – which have become extremely effective at shooting down aircraft. And they have been selling these systems to other countries (Iran, Syria, Egypt, etc.). While the role of an aircraft carrier’s EA-18G Growlers is to jam/confuse the radar of these missiles, the sophistication and range of these surface-to-air missiles have been evolving faster than the jamming countermeasures on the EA-18G Growlers (and the cyber hacks to shut the radars down).


This means that the odds of a carrier-based F/A-18 strike fighter successfully reaching a target defended by these modern surface-to-air missiles are diminishing yearly. Unless the U.S. military can take these air defense systems out with drones, cruise missiles or cyber attack, brave and skilled pilots may not be enough. Given that F/A-18s are manned aircraft (versus drones), high losses of pilots may be (politically) unacceptable.

Vulnerable carriers
If you want to kill a carrier, first you must find it and then you have to track it. In WWII, knowing where the enemy fleet was located was a big – and critical – question. Today, photo-imaging satellites, satellites that track electronic emissions (radio, radar, etc.) and satellites with synthetic aperture radar that can see through clouds and at night are able to pinpoint the strike group and carrier 24/7. In the 20th century only the Soviet Union had this capability. Today, China can do this in the Pacific and, to a limited extent, Iran has this capability in the Persian Gulf. Soon there will be enough commercial satellite coverage of the Earth using the same sensors that virtually anyone able to pay for the data will be able to track the ships.

During the Cold War the primary threat to carriers was from the air – from strike fighters dropping bombs/torpedoes or from cruise missiles (launched from ships and planes). While the Soviets had attack submarines, our Anti-Submarine Warfare (ASW) capabilities (along with very noisy Soviet subs, pre-Walker spy ring) made subs a secondary threat to carriers.

In the 20th century the war plan for a carrier strike group used its fighter and attack aircraft and Tomahawk cruise missiles launched from the cruisers to destroy enemy radar, surface-to-air missiles, aircraft and communications (including satellite downlinks). As those threats were eliminated, the carrier strike group could move closer to land without fear of attack. This allowed the attack aircraft to loiter longer over targets or extend their reach over enemy territory.

Carriers were designed to be most effective launching a high number of sorties (number of flights) from ~225 miles from the target. For example, we can cruise offshore of potential adversaries (Iraq and Syria) who can’t get to our carriers. (Carriers can stand off farther or reach further inland, but they have to launch F/A-18s as refueling tankers to extend the mission range. For example, missions into Afghanistan are 6-8 hours versus normal mission times of 2-3 hours.)

In the 21st century carrier strike groups are confronting better-equipped adversaries, and today carriers face multiple threats before they can launch an initial strike. These threats include much quieter submarines; long-range, sea-skimming cruise missiles; and, in the Pacific, a potential disruptive game changer – ballistic missiles armed with non-nuclear maneuverable warheads that can hit a carrier deck as it maneuvers at speed (the DF-21D and the longer-range DF-26).

In the Persian Gulf the carriers face another threat – Fast Inshore Attack Craft (FIAC) and speedboats with anti-ship cruise missiles that can be launched from shore.

The sum of all these threats – to the carrier-based aircraft and to the carriers themselves – is called anti-access/area denial (A2/AD) capabilities.

Eventually the cost and probability of defending the carrier as a manned-aircraft platform become untenable in highly defended A2/AD environments like the western Pacific or the Persian Gulf. (This seems to be exactly the problem the manned bomber folks are facing in multiple regions.) But if not a carrier, what will the Navy use to project power? While the carrier might become obsolete, the mission certainly has not.

So how does/should the Navy solve these problems?

Three Horizons of Innovation
One useful way to think about innovation in the face of increasing disruption and competition is called the “Three Horizons of Innovation.” It suggests that an organization should think about innovation across three categories called “Horizons.”

  • Horizon 1 activities support executing the existing mission with ever increasing efficiency
  • Horizon 2 is focused on extending the core mission
  • Horizon 3 is focused on searching for and creating brand new missions
    (see here for background on the Three Horizons.)

Horizon 1 is the Navy’s core mission. Here the Navy executes against a set of known mission requirements (known beneficiaries, known ships and planes, known adversaries, deployment, supply chain, etc.) It uses existing capabilities and has comparatively low risk to get the next improvement out the door.

In a well-run organization like the Navy, innovation and improvement occurs continuously in Horizon 1. Branches of the Navy innovate on new equipment, new tactics, new procurement processes, more sorties on newer carriers, etc. As fighter pilots want more capable manned aircraft and carrier captains want better carriers, it’s not a surprise that Horizon 1 innovations are upgrades – the next generation of carriers – Ford Class; and next generation of navy aircraft – the F-35C. As a failure here can impact the Navy’s current mission, Horizon 1 uses traditional product management tools to minimize risk and assure execution. (And yes, like any complex project they still manage to be over budget and miss their delivery schedule.)

Because failure here is unacceptable, Navy Horizon 1 programs and people are managed by building repeatable and scalable processes, procedures, incentives and promotions to execute the mission.

In Horizon 2, the Navy extends its core mission. Here it looks for new opportunities within its existing mission (trying new technology on the same platform, using the same technology with new missions, etc.) Horizon 2 uses mostly existing capabilities (the carrier as an aircraft platform, aircraft to deliver munitions) and has moderate risk in building or securing new capabilities to get the product out the door.

Examples of potential Navy Horizon 2 innovations are unmanned drones flying off carriers to do the jobs fighter pilots hate, such as serving as airborne tankers (who wants to fly a gas tank around for 6 hours?) and ISR (Intelligence, Surveillance and Reconnaissance) – another tedious mission, hours of flying around, that could be better solved with a drone downlinking ISR data for processing on board a ship.

However, getting the tanker and ISR functions onto drones only delays the inevitable shift to drones for strike, and then for fighters. The problem of strike fighters’ increasing difficulty in penetrating heavily defended targets isn’t going to get better with the new F-35C (the replacement for the F/A-18). In fact, it will get worse. Regardless of the bravery and skill of the pilots, they will face air defense systems evolving at a faster rate than the defensive systems on the aircraft. It’s not at all clear in a low-intensity conflict (think Bosnia or Syria) that civilian leadership will want to risk captured or killed pilots and losing planes like the F-35C that cost several hundred million dollars each.

Management in Horizon 2 works by pattern recognition and experimentation inside the current mission model. Ironically, institutional inertia keeps the Navy from deploying unmanned assets on carriers. In a perfect world, drones in carrier tanker and ISR roles should have been deployed by the beginning of this decade. And by now experience with them on a carrier deck could have led to first, autonomous wingmen and eventually autonomous missions. Instead the system appears to have fallen into the “real men fly planes and command Air Wings and get promoted by others who do” mindset.

The Navy does not lack drone demos and prototypes, but it has failed to deploy Horizon 2 innovations with speed and urgency. Failure to act aggressively here will impact the Navy’s ability to carry out its mission of sea control and power projection. (The Hudson Institute report on the future of the carrier is worth a read, and a RAND report on the same topic comes out in October.)

If you think Horizon 2 innovation is hard in the Navy, wait until you get to Horizon 3. This is where disruption happens. It’s how the aircraft carrier disrupted the battleship, how nuclear-powered ballistic missile submarines changed the nature of strategic deterrence, and how the DF-21/26 and artificial islands in the South China Sea changed decades of assumptions. And it’s why, in most organizations, innovation dies.

For the Navy, a Horizon 3 conversation would not be about better carriers and aircraft. Instead it would focus on the core reasons the Navy deploys a carrier strike group: to show the flag for deterrence, or to control part of the sea to protect shipping, or to protect a Marine amphibious force, or to project offensive power against any adversary in well-defended areas.

A Horizon 3 solution for the Navy would start with the basic needs of these missions (sea control, offensive power projection – sortie generation), the logistics requirements that come with them, and the barriers to their success like A2/AD threats. Lots of people have been talking and writing about this, and lots of Horizon 3 concepts have been proposed, such as Distributed Lethality, Arsenal Ships, underwater drone platforms, etc.

Focusing on these goals – not building or commanding carriers, or building and flying planes – is really, really hard. It’s hard to get existing operational organizations to think about disruption because it means thinking about obsoleting a job, function or skill they’ve spent their lives perfecting. It’s hard because any large organization is led by people who succeeded as Horizon 1 and 2 managers and operators (not researchers). Their whole focus, career, incentives, etc. have been about building and making the current platforms work. And the Navy has excelled in doing so.

The problem is that Horizon 3 solutions take different people, a different portfolio, a different process and different politics.

People: In Horizon 1 and 2 programs people who fail don’t get promoted because in a known process failure to execute is a failure of individual performance. However, applying the same rules to Horizon 3 programs – no failures tolerated – means we’ll have no learning and no disruptive innovations. What spooks leadership is that in Horizon 3 most of the projects will fail. But using Lean Innovation they’ll fail quickly and cheaply.

In Horizon 3 the initial program is run by mavericks – the crazy innovators. In the Navy, these are the people you want to court-martial or pass over for promotion for not getting with the current program. (In a startup they’d be the founding CEO.) These are the fearless innovators you want creating new and potentially disruptive mission models. Failure to support this potentially disruptive talent means it will go elsewhere.

Portfolio: In Horizon 3, the Navy is essentially incubating a startup. And not just one. The Navy needs a portfolio of Horizon 3 bets, for the same reason venture capital and large companies have a portfolio of Horizon 3 bets – most of these bets will fail – but the ones that succeed are game changers.
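The portfolio logic can be made concrete with a little arithmetic. As a purely illustrative sketch (the 10% per-bet success rate is an assumption, not a figure from the Navy or from venture data), even a modest portfolio of independent long-shot bets makes at least one game changer likely:

```python
# Illustrative only: assumes each Horizon 3 bet succeeds independently
# with probability p_success = 0.10 (a hypothetical figure).
def p_at_least_one_success(n_bets: int, p_success: float = 0.10) -> float:
    """Probability that at least one of n independent bets succeeds."""
    return 1 - (1 - p_success) ** n_bets

for n in (1, 5, 10, 20):
    print(f"{n:2d} bets -> {p_at_least_one_success(n):.0%} chance of at least one success")
```

With these assumed odds, a single bet succeeds 10% of the time, but a portfolio of 20 succeeds roughly 88% of the time – which is why betting on exactly one Horizon 3 program is the riskiest choice of all.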

Process: A critical difference between a Horizon 3 bet and a Horizon 1 or 2 bet is that you don’t build large, expensive, multi-year programs to test radically new concepts (think of the Zumwalt-class destroyers). You use “Lean” techniques to build Minimum Viable Products (MVPs). MVPs are whatever it takes to get you the most learning in the shortest period of time.

Horizon 3 groups operate with speed and urgency – the goal is rapid learning. They need to be physically separate from operating divisions in an incubator, or their own facility. And they need their own plans, procedures, policies, incentives and Key Performance Indicators (KPIs) different from those in Horizon 1.  

The watchwords in Horizon 3 are “If everything seems under control, you’re just not going fast enough.”

Politics: In Silicon Valley most startups fail. That’s why we invest in a portfolio of new ideas, not just one. We embrace failure as an integral part of learning. We do so by realizing that in Horizon 3 we are testing hypotheses – a series of unknowns – not executing knowns. Yet failure/learning is a dirty word in the world of promotions and the “gotcha game” of politics. To survive in this environment Horizon 3 leaders must learn how to communicate up/down and sideways that they are not running Horizon 1 and 2 projects.

Meanwhile, Navy and DOD leadership has to invest in, and clearly communicate their innovation strategy across all three Horizons.

Failure to manage innovation across all three Horizons and failure to make a portfolio of Horizon 3 bets means that the Navy is exposed to disruption by new entrants. Entrants unencumbered by decades of success, fueled by their own version of manifest destiny.

Lessons Learned

  • Our carriers are a work of art run and manned by professionals
    • Threats that can degrade or negate a carrier strike group exist in multiple areas
    • However, carriers are still a significant asset in almost all other combat scenarios
  • Speed and urgency rather than institutional inertia should be the watchwords for Horizon 2 innovation
  • Horizon 3 innovation is about clean-sheet-of-paper thinking
    • It’s what Silicon Valley calls disruption
    • It requires different people, portfolio, process and politics
  • The Navy (and DOD) must manage innovation across all three Horizons
    • Allocating dollars and resources for each
  • Remember that today’s Horizon 3 crazy idea is tomorrow’s Horizon 1 platform

Thanks to the crew of the U.S.S. Vinson, and Commander Todd Cimicata and Stanford for a real education about the Navy.

Why the Navy Needs Disruption Now (part 1 of 2)

The future is here; it’s just distributed unevenly – Silicon Valley view of tech adoption

The threat is here; it’s just distributed unevenly – A2/AD and the aircraft carrier

Sitting backwards in a plane with no windows, strapped into a 4-point harness, wearing a life preserver, head encased in a helmet, eyes covered by goggles, your brain can’t process the acceleration. As the C-2A Greyhound is hurled off an aircraft carrier into the air by a catapult, your body is thrown forward until, a few seconds later, hundreds of feet above the carrier and now at 150 miles per hour, you yell, “Holy Shxt.” And no one can hear you through the noise, helmet and ear protectors.


I just spent two days a hundred miles off the coast of Mexico aboard the U.S.S. Carl Vinson, landing and taking off on the carrier deck via a small cargo plane.

Taking off and landing is a great metaphor for the carrier. It’s designed to project power – and when needed, violence.

It’s hard to spend time on a carrier and not be impressed with the Navy and the dedicated people who man the carrier and serve their country. And of course that’s the purpose of the two-day tour. The Navy calls its program Outreach: America’s Navy. Targeting key influencers (whom it calls Distinguished Visitors), the Navy hosts 900 a year out to carriers off the West Coast and 500 a year to carriers on the East Coast. These tours are scheduled when the carriers are offshore training, not when they are deployed on missions. I joined Pete Newell (my fellow instructor in the Hacking for Defense class) and 11 other Stanford faculty from CISAC and the Hoover Institution.

I learned quite a bit about the physical layout of a carrier, how the air crew operates and how the carrier functions in context of the other ships around it (the strike group.) But the biggest learning was the realization that disruption is not just happening to companies, it’s also happening to the Navy. And that the Lean Innovation tools we’ve built to deal with disruption and create continuous innovation for large commercial organizations were equally relevant here.

The Carrier
U.S. aircraft carriers like the Vinson (there are 9 others) are designed to put the equivalent of an Air Force base on any ocean anywhere in the world. This means the U.S. can show the flag for deterrence (don’t do this or it will be a bad day), control some part of the sea (to protect commercial and/or military shipping, or protect a Marine amphibious force – on the way to or at the place it will land), and project power (a euphemism for striking targets with bombs and cruise missiles far from home).

On an aircraft carrier there are two groups of people – the crew needed to run the carrier, called the ship’s company, and the people who fly and support the aircraft they carry, called the Air Wing. The Vinson carries ~2,800 people in the ship’s company, ~2,000 in the Air Wing and ~150 staff.

Without the Air Wing the carrier would just be another big cruise ship. The Air Wing has 72 aircraft, a mix of jets and propeller planes. The core of the Air Wing is its 44 F/A-18 strike fighters.

The F/A-18 strike fighters are designed to do two jobs: gain air superiority by engaging other fighters in the air, or attack targets on the ground with bombs (that’s why they have the F/A designation). Flying on missions with these strike fighters are specially modified F/A-18s – EA-18G Growlers carrying electronic warfare jammers that electronically shut down enemy radars and surface-to-air missiles to ensure the F/A-18s get to the target without being shot down.

Another type of plane on the carrier is the propeller-driven E-2C Hawkeye, an airborne early warning plane. Think of the Hawkeye as airborne air traffic control. Hawkeyes carry a long-range radar in a dome above the fuselage and keep the strike group and the fighters constantly aware of incoming air threats. They can send data identifying the location of potential threats to the fighters and to other ships in the battle group. They can also detect other ships at sea.

The other planes in the carrier’s Air Wing are 16 helicopters: 8 MH-60S Nighthawks for logistics support, search and rescue and special warfare support; and 8 MH-60R Seahawks to locate and attack submarines and to attack surface targets. They carry sonobuoys, dipping sonar and anti-submarine torpedoes. And last but not least, there is the plane that got us onto the carrier, the C-2A Greyhound – the delivery truck for the carrier.

You’re not alone
Carriers like the Vinson don’t go to sea by themselves. They’re part of a group of ships called the “carrier strike group.” A strike group consists of a carrier; two cruisers with Tomahawk cruise missiles that can attack land targets; and two destroyers and/or frigates with Aegis surface-to-air missiles to defend the carrier from air attack. (In the past, the strike group was assigned an attack submarine to hunt for subs trying to kill the carrier. Today the attack subs are in such demand they are assigned by national authorities on an as-needed basis.) The strike group also includes replenishment ships that carry spare ammunition, fuel, etc. (The 150 staff on the carrier include separate staffs for the strike group, Air Wing, carrier, surface warfare (the cruisers with Tomahawk missiles) and air defense (the Aegis-armed destroyers).)

The strike group also receives antisubmarine intelligence from P-3/P-8 anti-submarine aircraft and towed arrays on the destroyers, and additional situational awareness from imaging, Electronic Intelligence (ELINT) and radar sensors and satellites.

Before our group flew out to the carrier, we were briefed by Vice-Admiral Mike Shoemaker. His job is aviation Type Commander (TYCOM) for all United States Navy naval aviation units (responsible for aircrew training, supply, readiness, etc.). He also wears another hat as the commander of all the Navy planes in the Pacific. It was interesting to hear that the biggest issues in keeping the airplanes ready to fight are sequestration and budget cuts. These cuts have impacted maintenance and made spare parts hard to get. And a lack of pay raises makes it hard to retain qualified people.

Then it was time to climb into our C-2 Greyhound for the flight out to the aircraft carrier. Just like a regular passenger plane, except you put on a life vest, goggles, ear plugs, and over all that a half helmet protecting the top and back of your head while enclosing your ears in large plastic ear muffs. Then you and 25 other passengers load the plane via the rear ramp, sit facing backwards in a plane with no windows and wait to land.

On the U.S.S. Vinson
Landing on an aircraft carrier is an equally violent act. When you make an arrested landing, a tail hook on the plane traps one of the four arresting cables stretched across the deck, and you decelerate from 105 mph to zero in two seconds. When the plane hit the arresting wire on the carrier deck, it came to a dead stop in 250 feet. There was absolutely no doubt that we had landed (and a great lesson on why you were wearing head protection, goggles and strapped into your non-reclining seat with a four-point harness). As the rear ramp lowered, we were assaulted with the visual and audio cacophony of crewmen in seven different colored shirts on the deck swarming on and around F-18s, E2Cs, helicopters, etc., all with their engines running.
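Those figures can be sanity-checked with basic kinematics. A back-of-the-envelope sketch, assuming constant deceleration (a real arresting cable applies a varying force, and the two quoted numbers – two seconds and 250 feet – imply slightly different deceleration profiles):

```python
# Back-of-the-envelope check on an arrested landing from 105 mph,
# assuming constant deceleration (an idealization of the cable's pull).
MPH_TO_MS = 0.44704
FT_TO_M = 0.3048
G = 9.81  # m/s^2

v0 = 105 * MPH_TO_MS          # touchdown speed, ~46.9 m/s

# If the stop takes 2 seconds:
a_time = v0 / 2.0             # ~23.5 m/s^2, about 2.4 g
d_time = v0 * 2.0 / 2         # distance covered: ~47 m (~154 ft)

# If instead the stop takes 250 feet:
d = 250 * FT_TO_M
a_dist = v0**2 / (2 * d)      # ~14.5 m/s^2, about 1.5 g

print(f"2-second stop: {a_time / G:.1f} g over {d_time / FT_TO_M:.0f} ft")
print(f"250-foot stop: {a_dist / G:.1f} g")
```

Either way the passengers take on the order of 1.5-2.5 g of deceleration, which is exactly why the seats face backwards and everyone is strapped into a four-point harness.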


Captain Doug Verissimo and his executive officer Captain Eric Anduze, welcomed us to the carrier. (One of my first problems onboard was translating Navy ranks into their Army/Air Force equivalents. For example, a navy captain equals an Air Force/Army Colonel, and a rear admiral is a brigadier general, etc.)

Then for the next two days the carrier’s public affairs officer led us on the “shock and awe” tour. In four years in the Air Force I had been stationed on four fighter bases, three of them in war zones, some with over 150 planes generating lots of sorties. But I had to grudgingly admit that watching F-18s landing on a 300-foot runway 60 feet above the water, on a pitching deck moving 30 mph at sea – one a minute – at night – was pretty impressive. And having us stand on the deck less than 50 feet away from these planes as they landed, trapping the arrestor wires, and launched via a catapult was a testament to the Navy’s PR acumen. Most of the crew on the flight deck are in their late teens and early 20s. (And for me, hard to believe that 4 decades ago in some other life I was doing that job.) Standing on the deck of a Navy carrier, it’s impossible not to be impressed with the precision choreography of the crew and the skill of their pilots.

Our group climbed the ladders (inclined at a 68-degree angle – there are no stairs) up and down the 18 decks (floors) of the ship. We saw the hangar deck where planes were repaired, the jet engine shop, jet engine test cell, arresting cable engine room, the bridge where they steer the ship, the flag bridge (the command center for the admiral), the flight deck control and launch operations room (where the aircraft handler keeps track of all the aircraft on the flight deck and in the hangar), and the carrier air traffic control center (CATCC).

At each stop an officer or enlisted man gave us an articulate description of what equipment we were looking at and how it fit into the rest of the carrier.

(What got left out of the tour was the combat direction center (CDC), the munitions elevators, ships engines and any of the avionics maintenance shops and of course, the nuclear reactor spaces.)

During lunch and dinners, we had a chance to talk at length to the officers and enlisted men. They were smart, dedicated and proud of what they do, and frank about the obstacles they face getting their jobs done. Interestingly they all echoed Vice-Admiral Shoemaker’s observation that the biggest obstacles they face are political –  sequestration and budget cuts.

Just before we left we got a briefing from the head of the Carrier Strike Group, Rear Admiral James T. Loeblein about the threats the carrier and the strike group face.

Then it was off to be catapulted back home.

It’s clear that the public affairs office has a finely tuned PR machine. So if the goal was to impress me that the Navy and carriers are well run and manned – consider it done.

However, it got me thinking… a new aircraft carrier costs $11 billion. And we have a lot of them on order. Given the threats they face, will they be viable for another 30 years? Or is the aircraft carrier obsolete?

Tomorrow’s post will offer a few days’ worth of thoughts about carriers, strike groups and how the Navy can continue to innovate with carriers and beyond.

Lessons Learned – part 1 of 2

Thanks to the crew of the U.S.S. Vinson, and Commander Todd Cimicata and Stanford for a real education about the Navy.
