Driven to Distraction – the future of car safety

If you haven’t gotten a new car in a while you may not have noticed that the future of the dashboard looks like this:


That’s it. A single screen replacing all the dashboard gauges, knobs and switches. But behind that screen is an increasing level of automation that hides a ton of complexity.

At times everything you need is on the screen at a glance. At other times you have to page through menus and poke at the screen while driving. And at 70 mph you have to work out whether you or your automated driving system is in control of the car. All while figuring out how to use new features, menus or a rearranged user interface that might have been updated overnight.

In the beginning of any technology revolution the technology gets ahead of the institutions designed to measure and regulate safety and standards. Both the vehicle’s designers and regulators will eventually catch up, but in the meantime we’re on the steep part of a learning curve – part of a million-person beta test – about what’s the right driver-to-vehicle interface.

We went through this with airplanes. And we’re reliving that transition in cars. Things will break, but in a few decades we’ll come out the other side, look back and wonder how people ever drove any other way.

Here’s how we got here, what it’s going to cost us, and where we’ll end up.


Cars, Computers and Safety
Two massive changes are occurring in automobiles: 1) the transition from internal combustion engines to electric, and 2) the introduction of automated driving.

But a third, equally important, change is also underway: the (r)evolution of car dashboards from dials and buttons to computer screens. For the first 100 years cars were essentially a mechanical platform – an internal combustion engine and transmission with seats – controlled by mechanical steering, accelerator and brakes. Instrumentation to monitor the car was made up of dials and gauges: a speedometer, tachometer, and fuel, water and battery gauges.
By the 1970’s driving became easier as automatic transmissions replaced manual gear shifting and hydraulically assisted steering and brakes became standard. Comfort features evolved as well: climate control – first heat, later air-conditioning; and entertainment – AM radio, FM radio, 8-track tape, CD’s, and today streaming media. In the last decade GPS-driven navigation systems began to appear.

Safety
At the same time cars were improving, automobile companies fought safety improvements tooth and nail. By the 1970’s auto deaths in the U.S. averaged 50,000 a year. Over 3.7 million people have died in cars in the U.S. since they appeared – more than all U.S. war deaths combined. (This puts auto companies in the rarified class of companies – along with tobacco companies – that have killed millions of their own customers.) Car companies argued that talking about safety would scare off customers, or that the added cost of safety features would put them at a competitive price disadvantage. But in reality, style was valued over safety.

Safety systems in automobiles have gone through three generations – passive systems and two generations of active systems. Today we’re about to enter a fourth generation – autonomous systems.

Passive safety systems are features that protect the occupants after a crash has occurred. They started appearing in cars in the 1930’s. Safety glass in windshields appeared in the 1930’s in response to horrific disfiguring crashes. Padded dashboards were added in the 1950’s, but it took Ralph Nader’s book, Unsafe at Any Speed, to spur federally mandated passive safety features in the U.S. beginning in the 1960’s: seat belts, crumple zones, collapsible steering wheels, four-way flashers and even better windshields. The Department of Transportation was created in 1966, but it wasn’t until 1979 that the National Highway Traffic Safety Administration (NHTSA) started crash-testing cars (the Insurance Institute for Highway Safety started their testing in 1995). In 1984 New York State mandated seat belt use (now required in 49 of the 50 states.)

These passive safety features started to pay off in the mid-1970’s as overall auto deaths in the U.S. began to decline.

Active safety systems try to prevent crashes before they happen. These depended on the invention of low-cost, automotive-grade computers and sensors. For example, accelerometers-on-a-chip made airbags possible because they could detect a crash in progress. Airbags began to appear in cars in the late 1980’s/1990’s and were required in 1998. In the 1990’s computers capable of real-time analysis of wheel sensors (position and slip) made ABS (anti-lock braking systems) possible. This feature was finally required in 2013.
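To make the ABS idea concrete, here’s a minimal sketch of the kind of slip-based logic described above. It is purely illustrative – the speeds, thresholds and pressure adjustments are assumptions, not any manufacturer’s actual control law.

```python
# Toy illustration of the idea behind anti-lock braking (ABS):
# estimate wheel slip from sensor readings and ease off brake
# pressure when the wheel starts to lock. All values are invented
# for illustration only.

def slip_ratio(vehicle_speed_mps: float, wheel_speed_mps: float) -> float:
    """Fraction by which the wheel is turning slower than the car is moving."""
    if vehicle_speed_mps <= 0:
        return 0.0
    return (vehicle_speed_mps - wheel_speed_mps) / vehicle_speed_mps

def abs_command(vehicle_speed_mps: float, wheel_speed_mps: float,
                brake_pressure: float, max_slip: float = 0.2) -> float:
    """Return adjusted brake pressure: release when slip exceeds max_slip."""
    if slip_ratio(vehicle_speed_mps, wheel_speed_mps) > max_slip:
        return brake_pressure * 0.7        # release pressure, let the wheel spin back up
    return min(1.0, brake_pressure * 1.05)  # otherwise re-apply toward driver demand

# Example: car at 30 m/s, wheel sensor reads 21 m/s -> 30% slip -> release pressure
print(abs_command(30.0, 21.0, brake_pressure=0.9))
```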

Since 2005 a second generation of active safety features have appeared. They run in the background and constantly monitor the vehicle and space around it for potential hazards. They include: Electronic Stability Control, Blind Spot Detection, Forward Collision Warning, Lane Departure Warning, Rearview Video Systems, Automatic Emergency Braking, Pedestrian Automatic Emergency Braking, Rear Automatic Emergency Braking, Rear Cross Traffic Alert and Lane Centering Assist.

Autonomous Cars
Today, a fourth wave of safety features is appearing as Autonomous/Self-Driving features. These include lane centering/auto steer, adaptive cruise control, traffic jam assist, self-parking and full self-driving. The National Highway Traffic Safety Administration (NHTSA) has adopted the six-level SAE standard to describe these vehicle automation features:

Getting above Level 2 is a really hard technical problem and has been discussed ad infinitum in other places. But what hasn’t gotten much attention is how drivers interact with these systems as the level of automation increases, and as the driving role shifts from the driver to the vehicle. Today, we don’t know whether there are times these features make cars less safe rather than more.

For example, Tesla and other cars have Level 2 and some Level 3 auto-driving features. Under Level 2 automation, drivers are supposed to monitor the automated driving, because the system can hand control of the car back to them with little or no warning. Under Level 3 automation drivers are not expected to monitor the environment, but they are still expected to be prepared to take control of the vehicle at all times, this time with notice.

Research suggests that drivers, when they aren’t actively controlling the vehicle, may be reading their phone, eating, looking at the scenery, etc. We really don’t know how drivers will perform in Level 2 and 3 automation. Drivers can lose situational awareness when they’re surprised by the behavior of the automation – asking: What is it doing now? Why did it do that? Or, what is it going to do next? There are open questions as to whether drivers can attain/sustain sufficient attention to take control before they hit something. (Trust me, at highway speeds having a “take over immediately” symbol pop up while you are gazing at the scenery raises your blood pressure, and hopefully your reaction time.)

If these technical challenges weren’t enough for drivers to manage, these autonomous driving features are appearing at the same time that car dashboards are becoming computer displays.

We never had cars that worked like this. Not only will users have to get used to dashboards that are now computer displays, they are going to have to understand the subtle differences between automated and semi-automated features, and do so as auto makers are developing and constantly updating them. They may not have much help mastering the changes. Most users don’t read the manual, and, in some cars, the manuals aren’t even keeping up with the new features.

But while we never had cars that worked like this, we already have planes that do.
Let’s see what we’ve learned in 100 years of designing controls and automation for aircraft cockpits and pilots, and what it might mean for cars.

Aircraft Cockpits
Airplanes have gone through multiple generations of aircraft and cockpit automation. But unlike cars, which are just now seeing automated systems, airplanes first saw automation in the 1920s and 1930s.

For their first 35 years airplane cockpits, much like early car dashboards, were simple – a few mechanical instruments for speed, altitude, relative heading and fuel. By the late 1930’s the British Royal Air Force (RAF) standardized on a set of flight instruments. Over the next decade this evolved into the “Basic T” instrument layout – the de facto standard of how aircraft flight instruments were laid out.

Engine instruments were added to measure the health of the aircraft engines – fuel and oil quantity, pressure and temperature, and engine speed.

Next, as airplanes became bigger, and the aerodynamic forces increased, it became difficult to manually move the control surfaces so pneumatic or hydraulic motors were added to increase the pilots’ physical force. Mechanical devices like yaw dampers and Mach trim compensators corrected the behavior of the plane.

Over time, navigation instruments were added to cockpits. At first, they were simple autopilots to just keep the plane straight and level and on a compass course. The next addition was a radio receiver to pick up signals from navigation stations. This was so pilots could set the desired bearing to the ground station into a course deviation display, and the autopilot would fly the displayed course.

In the 1960s, electrical systems began to replace the mechanical systems:

  • electric gyroscopes (INS) and autopilots using VOR (Very High Frequency Omni-directional Range) radio beacons to follow a track
  • auto-throttle – to manage engine power in order to maintain a selected speed
  • flight director displays – to show pilots how to fly the aircraft to achieve a preselected speed and flight path
  • weather radars – to see and avoid storms
  • Instrument Landing Systems – to help automate landings by giving the aircraft horizontal and vertical guidance.

By 1960 a modern jet cockpit (the Boeing 707) looked like this:

While it might look complicated, each of the aircraft instruments displayed a single piece of data. Switches and knobs were all electromechanical.

Enter the Glass Cockpit and Autonomous Flying
Fast forward to today and the third generation of aircraft automation. Today’s aircraft might look similar from the outside but on the inside four things are radically different:

  1. The clutter of instruments in the cockpit has been replaced with color displays creating a “glass cockpit”
  2. The airplane’s engines got their own dedicated computer systems – FADEC (Full Authority Digital Engine Control) – to autonomously control the engines
  3. The engines themselves are an order of magnitude more reliable
  4. Navigation systems have turned into full-blown autonomous flight management systems

So today a modern airplane cockpit (an Airbus 320) looks like this:

Today, airplane navigation is a real-world example of autonomous driving – in the sky. Two additional systems, the Terrain Awareness and Warning System (TAWS) and the Traffic Collision Avoidance System (TCAS), gave pilots a view of what’s underneath and around them, dramatically increasing pilots’ situational awareness and flight safety. (Autonomy in the air is technically a much simpler problem because in the cruise portion of flight there are far fewer things to worry about in the air than in a car.)

Navigation in planes has turned into autonomous “flight management.” Instead of a course deviation dial, navigation information is now presented as a “moving map” on a display showing the position of navigation waypoints by latitude and longitude. The position of the airplane no longer comes from ground radio stations, but rather is determined by Global Positioning System (GPS) satellites or autonomous inertial reference units. The route of flight is pre-programmed by the pilot (or uploaded automatically) and the pilot can connect the autopilot to autonomously fly the displayed route. Pilots enter navigation data into the Flight Management System with a keyboard. The flight management system also automates vertical and lateral navigation, fuel and balance optimization, throttle settings, critical speed calculation and execution of take-offs and landings.
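To make the “route as a list of waypoints” idea concrete, here’s a toy sketch of how a route can be stored as latitude/longitude waypoints and leg distances computed with the standard haversine formula. The waypoint names and coordinates are invented; a real flight management system is vastly more sophisticated.

```python
# Minimal sketch of a route stored as latitude/longitude waypoints,
# with leg distances computed by the standard haversine formula.
# Waypoint names and coordinates are invented for illustration.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_NM = 3440.1  # mean Earth radius in nautical miles

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_NM * asin(sqrt(a))

# A pre-programmed route: (name, latitude, longitude)
route = [
    ("KSFO", 37.619, -122.375),
    ("WPT01", 38.500, -121.500),
    ("WPT02", 39.250, -120.000),
]

for (name_a, lat_a, lon_a), (name_b, lat_b, lon_b) in zip(route, route[1:]):
    print(f"{name_a} -> {name_b}: {haversine_nm(lat_a, lon_a, lat_b, lon_b):.1f} nm")
```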

Automating the airplane cockpit relieved pilots from repetitive tasks and allowed less skilled pilots to fly safely. Commercial airline safety dramatically increased as the commercial jet airline fleet quadrupled in size from ~5,000 in 1980 to over 20,000 today. (Most passengers today would be surprised to find out how much of their flight was flown by the autopilot versus the pilot.)

Why Cars Are Like Airplanes
And here lies the connection between what’s happened to airplanes with what is about to happen to cars.

The downside of glass cockpits and cockpit automation is that pilots no longer actively operate the aircraft but instead monitor it. And humans are particularly poor at monitoring for long periods. Pilots have lost basic manual and cognitive flying skills because of a lack of practice and feel for the aircraft. In addition, the need to “manage” the automation, particularly when it involves data entry or retrieval through a key-pad, increased rather than decreased the pilot workload. And when systems fail, poorly designed user interfaces reduce a pilot’s situational awareness and can create cognitive overload.

Today, pilot errors – not mechanical failures – cause at least 70-80% of commercial airplane accidents. The FAA and NTSB have been analyzing crashes and have been writing extensively on how flight deck automation is affecting pilots. (Crashes like Asiana 214 happened when pilots selected the wrong mode on a computer screen.) The FAA has written the definitive document on how people and automated systems ought to interact.

In the meantime, the National Highway Traffic Safety Administration (NHTSA) has found that 94% of car crashes are due to human error – bad choices drivers make such as inattention, distraction, driving too fast, poor judgment/performance, drunk driving, lack of sleep.

NHTSA has begun to investigate how people will interact with both displays and automation in cars. They’re beginning to figure out:

  • What’s the right way to design a driver-to-vehicle interface on a screen to show:
    • Vehicle status gauges and knobs (speedometer, fuel/range, time, climate control)
    • Navigation maps and controls
    • Media/entertainment systems
  • How do you design for situation awareness?
    • What’s the best driver-to-vehicle interface to display the state of vehicle automation and Autonomous/Self-Driving features?
    • How do you manage the information available to understand what’s currently happening and project what will happen next?
  • What’s the right level of cognitive load when designing interfaces for decisions that have to be made in milliseconds?
    • What’s the distraction level from mobile devices? For example, how does your car handle your phone? Is it integrated into the system or do you have to fumble to use it?
  • How do you design a user interface for millions of users whose age may span from 16-90; with different eyesight, reaction time, and ability to learn new screen layouts and features?

Some of their findings are in the document Human-centric design guidance for driver-vehicle interfaces. But what’s striking is how little of the NHTSA documents reference the decades of expensive lessons that the aircraft industry has learned. Glass cockpits and aircraft autonomy have traveled this road before. Even though aviation safety lessons have to be tuned to the different reaction times needed in cars (airplanes fly 10 times faster, yet pilots often have seconds or minutes to respond to problems, while in a car the decisions often have to be made in milliseconds), there’s a lot the two industries can learn together. Commercial aviation has gone 9 years in the U.S. with just one fatality, yet in 2017 37,000 people died in car crashes in the U.S.

There Are No Safety Ratings for Your Car As You Drive
In the U.S. aircraft safety has been proactive. Since 1927 new types of aircraft (and each sub-assembly) have been required to get a type approval from the FAA (and its predecessor agencies) before they can be sold and be issued an Airworthiness Certificate.

Unlike aircraft, car safety in the U.S. has been reactive. New models don’t require a type approval, instead each car company self-certifies that their car meets federal safety standards. NHTSA waits until a defect has emerged and then can issue a recall.

If you want to know how safe your model of car will be during a crash, you can look at the National Highway Traffic Safety Administration (NHTSA) New Car Assessment Program (NCAP) crash-tests, or the Insurance Institute for Highway Safety (IIHS) safety ratings. Both summarize how well the active and passive safety systems will perform in frontal, side, and rollover crashes. But today, there are no equivalent ratings for how safe cars are while you’re driving them. What is considered a good vs. bad user interface and do they have different crash rates? Does the transition from Level 1, 2 and 3 autonomy confuse drivers to the point of causing crashes? How do you measure and test these systems? What’s the role of regulators in doing so?

Given that the NHTSA and the FAA are both in the Department of Transportation (DoT), it makes you wonder whether these government agencies actively talk to and collaborate with each other and have integrated programs and common best practices. And whether they have extracted best practices from the NTSB. And from the early efforts of Tesla, Audi, Volvo, BMW, etc., it’s not clear they’ve looked at the airplane lessons either.

It seems like the logical thing for NHTSA to do during this autonomous transition is 1) start defining “best practices” in U/I and automation safety interfaces and 2) test Level 2-4 cars for safety while you drive (like the crash tests, but for situational awareness, cognitive load, etc., in a set of driving scenarios). There are great university programs already doing that research.

However, the DoT’s Automated Vehicles 3.0 plan moves the agency further from owning the role of “best practices” in U/I and automation safety interfaces. It assumes that car companies will do a good job self-certifying these new technologies. And has no plans for safety testing and rating these new Level 2-4 autonomous features.

(Keep in mind that publishing best practices and testing for autonomous safety features is not the same as imposing regulations to slow down innovation.)

It looks like it might take an independent organization like the SAE to propose some best practices and ratings. (Or there’s the slim possibility that the auto industry comes together and sets de facto standards.)

The Chaotic Transition
It took 30 years, from 1900 to 1930, to transition from horses and buggies in city streets to automobiles dominating traffic. During that time former buggy drivers had to learn a completely new set of rules to control their cars. And the roads in those 30 years were a mix of traffic – it was chaotic.
In New York City the tipping point was 1908 when the number of cars passed the number of horses. The last horse-drawn trolley left the streets of New York in 1917. (It took another decade or two to displace the horse from farms, public transport and wagon delivery systems.) Today, we’re about to undergo the same transition.

Cars are on the path for full autonomy, but we’re seeing two different approaches on how to achieve Level 4 and 5 “hands off” driverless cars. Existing car manufacturers, locked into the existing car designs, are approaching this step-wise – adding additional levels of autonomy over time – with new models or updates; while new car startups (Waymo, Zoox, Cruise, etc.) are attempting to go right to Level 4 and 5.

We’re going to have 20 or so years with the roads full of a mix of millions of cars – some being manually driven, some with Level 2 and 3 driver assistance features, and others autonomous vehicles with “hands-off” Level 4 and 5 autonomy. It may take at least 20 years before autonomous vehicles become the dominant platforms. In the meantime, this mix of traffic is going to be chaotic. (Some suggest that during this transition we require autonomous vehicles to have signs in their rear window, like student drivers, but this time saying, “Caution AI on board.”)

As there will be no government best practices for U/I or scores for autonomy safety, learning and discovery will be happening on the road. That makes the ability for car companies to have over-the-air updates for both the dashboard user interface and the automated driving features essential. Incremental and iterative updates will add new features, while fixing bad ones. Engaging customers to make them realize they’re part of the journey will ultimately make this a successful experiment.

My bet is much like when airplanes went to glass cockpits with increasingly automated systems, we’ll create new ways drivers crash their cars, while ultimately increasing overall vehicle safety.

But in the next decade or two, with the government telling car companies “roll your own”, it’s going to be one heck of a ride.

Lessons Learned

  • There’s a (r)evolution as car dashboards move from dials and buttons to computer screens and the introduction of automated driving
    • Computer screens and autonomy will both create new problems for drivers
    • There are no standards to measure the safety of these systems
    • There are no standards for how information is presented
  • Aircraft cockpits are 10 to 20 years ahead of car companies in studying and solving this problem
    • Car and aircraft regulators need to share their learnings
    • Car companies can reduce crashes and deaths if they look to aircraft cockpit design for car user interface lessons
  • The Department of Transportation has removed barriers to the rapid adoption of autonomous vehicles
    • Car companies “self-certify” whether their U/I and autonomy are safe
    • There are no equivalents of crash safety scores for driving safety with autonomous features
  • Over-the-air updates for car software will become essential
    • But the downside is they could dramatically change the U/I without warning
  • On the path for full autonomy we’ll have three generations of cars on the road
    • The transition will be chaotic, so hang on, it’s going to be a bumpy ride, but the destination – safety for everyone on the road – will be worth it

The Apple Watch – Tipping Point Time for Healthcare

I don’t own an Apple Watch. I do have a Fitbit. But the Apple Watch 4 announcement intrigued me in a way no other product has since the original iPhone. This wasn’t just another product announcement from Apple. It heralded the U.S. Food and Drug Administration’s (FDA) entrance into the 21st century. It is a harbinger of the future of healthcare and how the FDA approaches innovation.

Sooner than people think, virtually all home and outpatient diagnostics will be performed by consumer devices such as the Apple Watch, mobile phones, fitness trackers, etc. that have either become FDA cleared as medical devices or have apps that have received FDA clearance. Consumer devices will morph into medical grade devices, with some painful and well publicized mistakes along the way.

Let’s see how it turns out for Apple.


Smartwatches are the apex of the most sophisticated electronics on the planet. And the Apple Watch is the most complex of them all. Packed inside a 40mm wide, 10mm deep package is a 64-bit computer, 16 GB of memory, Wi-Fi, NFC, cellular, Bluetooth, GPS, accelerometer, altimeter, gyroscope, heart rate sensor, and an ECG sensor – displaying it all on a 448 by 368 OLED display.
When I was a kid, this was science fiction.  Heck, up until its first shipment in 2015, it was science fiction.

But as impressive as its technology is, Apple’s smartwatch has been a product in search of a problem to solve. At first, positioned as a fashion statement, it seemed like the watch was actually an excuse to sell expensive wristbands. Subsequent versions focused on fitness and sports – the watch was like a Fitbit – plus the ability to be annoyed by interruptions from your work. But now the fourth version of the Watch might have just found the beginnings of “gotta have it” killer applications – healthcare – specifically medical diagnostics and screening.

Healthcare on Your Wrist
Large tech companies like Google, Amazon and Apple recognize that the multi-trillion dollar health care market is ripe for disruption and have poured billions of dollars into the space. Google has been investing in a broad healthcare portfolio, Amazon has been investing in pharmacy distribution and Apple…? Apple has been focused on turning the Apple Watch into the future of health screening and diagnostics.

Apple’s latest Watch – with three new healthcare diagnostics and screening apps – gives us a glimpse into what the future of healthcare diagnostics and screening could look like.

The first new healthcare app on the Watch is Fall Detection. Perhaps you’ve seen the old commercials where someone falls and can’t get up, and has a device that calls for help. Well this is it – built into the watch. The watch’s built-in accelerometer and gyroscope analyze your wrist trajectory and impact acceleration to figure out if you’ve taken a hard fall. You can dismiss the alert, or have it call 911. Or, if you haven’t moved after a minute, it can call emergency services, and send a message along with your location.
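Apple hasn’t published its algorithm, but the general approach described above – a hard impact followed by a stretch of no movement – can be sketched in a few lines. The thresholds and sample trace below are invented for illustration only.

```python
# Toy sketch of the general fall-detection idea described above:
# a large acceleration spike (impact) followed by a stretch of very
# little movement. Thresholds and the sample trace are invented;
# this is not Apple's algorithm.

def detect_hard_fall(accel_g, impact_g=3.0, still_g=0.15, still_samples=60):
    """accel_g: list of acceleration magnitudes (in g, gravity removed), one per second."""
    for i, a in enumerate(accel_g):
        if a >= impact_g:  # candidate impact
            window = accel_g[i + 1 : i + 1 + still_samples]
            if len(window) == still_samples and all(x < still_g for x in window):
                return True  # hard impact, then ~a minute without movement
    return False

# Example: quiet activity, a 4g spike, then 60 seconds of stillness
trace = [0.1] * 10 + [4.0] + [0.05] * 60
print(detect_hard_fall(trace))  # True -> would trigger the emergency call flow
```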

If you’re in Apple’s current demographic you might think, “Who cares?” But if you have an aged parent, you might start thinking, “How can I get them to wear this watch?”

The second new healthcare app also uses an existing sensor – the watch’s optical heart sensor. Running in the background, it gathers heart data and applies an algorithm that can detect irregular heart rhythms. If it senses something is not right, up pops an alert. A serious and common type of irregular heart rhythm is atrial fibrillation (AFib). AFib happens when the atria – the top two chambers of the heart – get out of sync, and instead of beating at a normal 60 beats a minute they may quiver at 300 beats per minute.

This rapid heartbeat allows blood to pool in the heart, which can cause clots to form and travel to the brain, causing a stroke. Between 2.7 and 6.1 million people in the US have AFib (2% of people under 65 have it, while 9% of people over 65 years have it.) It puts ~750,000 people a year in the hospital and contributes to ~130,000 deaths each year. But if you catch atrial fibrillation early, there’s an effective treatment — blood thinners.
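Apple’s cleared algorithm isn’t public, but irregular-rhythm screening from an optical heart sensor is commonly framed as looking at how erratic the intervals between beats are. Here’s a naive sketch of that idea, with an invented threshold and made-up data:

```python
# Naive sketch of irregular-rhythm screening from beat-to-beat (RR) intervals:
# atrial fibrillation shows up as highly irregular spacing between beats.
# The threshold and sample data are invented; this is not Apple's algorithm.
from statistics import mean, stdev

def rhythm_is_irregular(rr_intervals_ms, cv_threshold=0.15):
    """Flag a window of RR intervals whose coefficient of variation is high."""
    cv = stdev(rr_intervals_ms) / mean(rr_intervals_ms)
    return cv > cv_threshold

regular   = [810, 795, 805, 800, 798, 803, 801, 799]    # steady ~75 bpm
irregular = [620, 910, 540, 1020, 700, 480, 880, 760]   # chaotic spacing

print(rhythm_is_irregular(regular))    # False
print(rhythm_is_irregular(irregular))  # True -> prompt the user to run an ECG
```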

If your watch gives you an irregular heart rhythm alert you can run the third new healthcare app – the Electrocardiogram.

The Electrocardiogram (ECG or EKG) is a visual presentation of whether your heart is working correctly. It records the electrical activity of the heart and shows doctors the rhythm of heartbeats, the size and position of the chambers of the heart, and any damage to the heart’s muscle. Today, ECGs are done in a doctor’s office by having you lie down, and sticking 10 electrodes to your arms, legs and chest. The heart’s electrical signals are then measured from twelve angles (called “leads”).

With the Apple Watch, you can take an ECG by just putting your finger on the crown for 30 seconds. To make this work Apple has added two electrodes (the equivalent of a single lead), one on the back of the watch and another on the crown. The ECG can tell you that you may have atrial fibrillation (AFib) and suggest you see a doctor. As the ECG is saved in a PDF file (surprisingly it’s not also in the HL7’s FHIR Format), you can send it to your doctor, who may decide no visit is necessary.

These two apps, the Electrocardiogram and the irregular heart rhythm notification, are serious health screening tools. They are supposed to ship in the U.S. by the end of 2018. By the end of next year, they could be on the wrists of tens of millions of people.

The question is whether they are going to create millions of unnecessary doctors’ visits from unnecessarily concerned users, or save thousands of lives. My bet is both – until traditional healthcare catches up with the fact that in the next decade screening devices will be in everyone’s hands (or on their wrists.)

Apple and The FDA – Clinical Trials
In the U.S. medical devices, drugs and diagnostics are regulated by the Food and Drug Administration – the FDA. What’s unique about the Apple Watch is that both the Electrocardiogram and the irregular heart rhythm apps required Apple to get clearance from the FDA. This is a very big deal.

The FDA requires evidence that medical devices do what they claim. To gather that evidence companies enroll volunteers in a study – called a clinical trial – to see if the device does what the company thinks it will.

Stanford University has been running a clinical trial on irregular heart rhythms for Apple since 2017, with a completion date in 2019. The goal is to see if an irregular pulse notification is really atrial fibrillation, and how many of those notified contacted a doctor within 90 days. (The Stanford study appears to be using previous versions of the Apple Watch with just the optical sensor and not the new ECG sensors. They used a separate wearable heart monitor to confirm the AFib.)

Nov 1 2018 Update – the design of the Stanford Apple Watch study was published here

To get FDA clearance, Apple reportedly submitted two studies to the FDA (so far none of the data has been published or peer reviewed). In one trial with 588 people, half of whom were known to have AFib and the other half of whom were healthy, the app couldn’t read 10% of the recordings. But for the other 90%, it was able to identify over 98% of the patients who had AFib, and over 99% of patients that had healthy heart rates.

The second data set Apple sent the FDA was part of Stanford’s Apple Heart Study. The app first identified 226 people with an irregular heart rhythm. The goal was to see how well the Apple Watch could pick up an event that looked like atrial fibrillation compared to a wearable heart monitor. The traditional monitors identified that 41 percent of people had an atrial fibrillation event. In 79 percent of those cases, the Apple app also picked something up.

This was good enough for the FDA.

The FDA – Running Hard to Keep Up With Disruption
And “good enough” is a big idea for the FDA. In the past the FDA was viewed as inflexible and dogmatic by new companies while viewed as insufficiently protective by watchdog organizations.

For the FDA, this announcement was as important as it was for Apple.

The FDA has to adjudicate between a whole host of conflicting constituents and priorities. Its purpose is to make sure that drugs, devices, diagnostics, and software products don’t harm thousands or even millions of people, so the FDA wants a process to make sure they get it right. This is a continual trade-off between patient safety, good enough data and decision making, and complete clinical proof. On the other hand, for a company, an FDA clearance can be worth hundreds of millions or even billions of dollars. And a disapproval or delayed clearance can put a startup out of business. Finally, the rate of change of innovation for medical devices, diagnostics and digital health has moved faster than the FDA’s ability to adapt its regulatory processes. Frustrated by the FDA’s 20th century processes for 21st century technology, companies hired lobbyists to force a change in the laws that guide the FDA regulations.

So, the Apple announcement is a visible signal in Washington that the FDA is encouraging innovation. In the last two years the FDA has been trying to prove it could keep up with the rapid advancements in digital health, devices and diagnostics – while trying to prevent another Theranos.

Since the appointment of the new head of the FDA, there has been very substantial progress in speeding up mobile and digital device clearances with new guidelines and policies. For example, in the last year the FDA announced its Pre-Cert pilot program which allows companies making software as a medical device to build products without each new device undergoing the FDA clearance process. The pilot program allowed nine companies, including Apple, to begin developing products (like the Watch) using this regulatory shortcut. (The FDA has also proposed new rules for clinical support software that say if doctors can review and understand the basis of the software’s decision, the tool does not have to be regulated by the FDA.)

This rapid clearance process as the standard – rather than the exception – is a sea change for the FDA. It’s close to de facto adopting a Lean decision-making process and rapid clearances for things that minimally affect health. It’s how China approaches approvals, and it will allow U.S. companies to remain competitive in an area (medical devices) where China has declared its intent to dominate.

Did Apple Cut in Front of the Line?
Some have complained that the FDA has been too cozy with Apple over this announcement.

Apple got its two FDA Class II clearances through what’s called a “de novo” pathway, meaning Apple claimed these features were the first of their kind. (They may be the first built into the watch, but this is not the first Apple Watch ECG app cleared by the FDA – AliveCor got over-the-counter clearance in 2014 and Cardiac Designs in 2013.) Critics said that the de novo process should only be used where there is no predicate (a substantially equivalent, already cleared device). But Apple cited at least one predicate, and had it followed the conventional 510(k) approval process, clearance should have taken at least 100 days. Yet Apple got two software clearances in under 30 days, which uncannily appeared the day before their product announcement.

To be fair to Apple, they were likely holding pre-submission meetings with the FDA for quite some time, perhaps years. One could speculate that using the FDA Pre-Cert pilot program they consulted on the design of the clinical trial, trial endpoints, conduct, inclusion and exclusion criteria, etc. This is all proper medical device company thinking and exactly how consumer device companies need to approach and work with the FDA to get devices or software cleared. And it’s exactly how the FDA should be envisioning its future.

Given Apple sells ~15 million Apple Watches a year, the company is about to embark on a public trial at massive scale of these features – with its initial patient population at the least risk for these conditions. It will be interesting to see what happens. Will overly concerned 20- and 30-year-olds flood doctors with false positives? Or will we be reading about lives saved?
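A back-of-the-envelope calculation shows why both answers are plausible. Take the roughly 98% sensitivity and 99% specificity reported to the FDA, and assume – purely for illustration – that only 0.5% of younger Watch wearers actually have AFib:

```python
# Back-of-the-envelope positive predictive value, using the roughly 98%
# sensitivity / 99% specificity reported to the FDA and an assumed
# (illustrative) AFib prevalence of 0.5% among younger Watch wearers.
sensitivity = 0.98
specificity = 0.99
prevalence = 0.005  # assumption for illustration only

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)

print(f"Share of alerts that are real AFib: {ppv:.0%}")        # ~33%
print(f"Share of alerts that are false alarms: {1 - ppv:.0%}")  # ~67%
```

In that illustrative scenario roughly two out of every three alerts would be false alarms – and yet the app would still catch nearly every wearer who really has AFib.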

Why most consumer hardware companies aren’t medical device and diagnostic companies
Historically consumer electronics companies and medical device and diagnostic companies were very different companies. In the U.S. medical device and diagnostic products require both regulatory clearance from the FDA and reimbursement approval by different private and public insurers to get paid for the products.

These regulatory and reimbursement agencies have very different timelines and priorities than for-profit companies. Therefore, to get FDA clearance a critical part of a medical device company is devoted to building a staff and hiring consultants, such as clinical research organizations, who can master and navigate FDA regulations and clinical trials.

And just because a company gets the FDA to clear their device/diagnostic/software doesn’t mean they’ll get paid for it. In the U.S. medical devices are reimbursed by private insurance companies (Blue Cross/Blue Shield, etc.) and/or the U.S. government via Centers for Medicare & Medicaid Services (CMS). Getting these clearances to get the product covered, coded and paid is as hard as getting the FDA clearance, often taking another 2-3 years. Mastering the reimbursement path requires a company to have yet another group of specialists conduct expensive clinical cost outcomes studies.

The Watch announcement telegraphed something interesting about Apple – they’re one of the few consumer products companies to crack the FDA clearance process (Philips being the other). And going forward, unless these new apps are a disaster, it opens the door for them to add additional FDA-cleared screening and diagnostic tools to the watch (and by extension a host of AI-driven imaging diagnostics – melanoma detection, etc. – to the iPhone). This by itself is a key differentiator for the Watch as a healthcare device.

The other interesting observation: Unlike other medical device companies, Apple’s current Watch business model is not dependent on getting insurers to pay for the watch. Today consumers pay directly for the Watch. However, if the Apple Watch becomes a device eligible for reimbursement, there’s a huge revenue upside for Apple. When and if that happens, your insurance would pay for all or part of an Apple Watch as a diagnostic tool.

(After running cost outcome studies, insurers believe that preventative measures like staying fit brings down their overall expense for a variety of conditions. So today some life insurance companies are mandating the use of an activity tracker like Apple Watch.)

The Future of SmartWatches in Healthcare
Very few companies (probably less than five) have the prowess to integrate sensors, silicon and software with FDA regulatory clearance into a small package like the Apple Watch.

So what else can/will Apple offer on the next versions of the Watch? After looking through Apple’s patents, here’s my take on the list of medical diagnostics and screening apps Apple may add.

Sleep Tracking and Sleep Apnea Detection
Compared to the Fitbit, the lack of a sleep tracking app on the Apple Watch is a mystery (though third-party sleep apps are available.) Its absence is surprising as the Watch can theoretically do much more than just sleep tracking – it can potentially detect sleep apnea. Sleep apnea happens when you’re sleeping and your upper airway becomes blocked, reducing or completely stopping air to your lungs. This can cause a host of complications including Type 2 diabetes, high blood pressure, liver problems, snoring and daytime fatigue. Today, diagnosing sleep apnea often requires an overnight stay in a sleep study clinic. Sleep apnea screening doesn’t appear to require any new sensors and would be a great app for the Watch. Perhaps the app is missing because you have to take the watch off and recharge it every night?

Pulse oximetry
Pulse oximetry is a test used to measure the oxygen level (oxygen saturation) of the blood. The current Apple Watch can already determine how much oxygen is contained in your blood based on the amount of infrared light it absorbs. But for some reason Apple hasn’t released this feature – FDA regulations? Inconsistent readings?  Another essential Watch health app that may or may not require any new sensors.

Respiration rate
Respiration rate (the number of breaths a person takes per minute) along with blood pressure, heart rate and temperature make up a person’s vital signs. Apple has a patent for this watch feature but for some reason hasn’t released it – FDA regulations?  Inconsistent readings?  Another essential Watch health app that doesn’t appear to require any new sensors.

Blood Pressure
About a third of Americans have high blood pressure. High blood pressure increases the risk of heart disease and stroke. It often has no warning signs or symptoms. Many people do not know they have it, and only about half of those who do have it under control. Traditionally, measuring blood pressure requires a cuff on the arm and produces a single measurement at a single point in time. We’ve never had the ability to continually monitor a person’s blood pressure under stress or during sleep. Apple filed two patents in 2017 to measure blood pressure by holding the watch against your chest. This is tough to do, but it would be another great health app for the Watch that may or may not require any new sensors.

Sunburn/UV Detector
Apple has patented a new type of sensor – a sunscreen detector to let you know which exposed areas of the skin may be at elevated UV exposure risk. I’m not big on this, but the use of ever more powerful sunscreens has quadrupled, while at the same time the incidence of skin cancers has also quadrupled, so there may be a market here.

Parkinson’s Disease Diagnosis and Monitoring
Parkinson’s Disease is a brain disorder that leads to shaking, stiffness, and difficulty with walking, balance, and coordination. It affects about 1% of people over 60. Today, there is no diagnostic test for the disease (i.e. blood test, brain scan or EEG). Instead, doctors look for four signs: tremor, rigidity, bradykinesia/akinesia and postural instability. Today patients have to go to a doctor for tests to rate the severity of their symptoms and keep a diary of their symptoms.

Apple added a new “Movement Disorder API” to its ResearchKit framework that supports movement and tremor detection. It allows an Apple Watch to continuously monitor for Parkinson’s disease symptoms: tremors and dyskinesia, a side-effect of treatments for Parkinson’s that causes fidgeting and swaying motions in patients. Researchers have built a prototype Parkinson’s detection app on top of it. It appears that screening for Parkinson’s would not require any new sensors – but likely clinical trials and FDA clearance – and would be a great app for the Watch.

Glucose Monitoring
More than 100 million U.S. adults live with diabetes or prediabetes. If you’re a diabetic, monitoring your blood glucose level is essential to controlling the disease. However, it requires sticking your finger to draw blood multiple times a day. The holy grail of glucose monitoring has been a sensor that can detect glucose levels through the skin. This sensor has been the graveyard of tons of startups that have crashed and burned pursuing this. Apple has a patent application that looks suspiciously like a non-invasive glucose monitoring sensor for the Apple Watch. This is a really tough technical problem to solve, and even if the sensor works, there would be a long period of clinical trials for FDA clearance, but this app would be a game changer for diabetic patients – and Apple – if they can make it happen.

Sensor and Data Challenges
With many of these sensors, just getting a signal is easy. Correlating that particular signal to an underlying condition, and avoiding being confounded by other factors, is what makes achieving medical device claims so hard.

As medical-grade data acquisition becomes possible, continuous or real-time transmission will capture baseline data on tens of millions of “healthies” – data that will be vital for training the algorithms and eventually predicting disease earlier. This will eventually enable more accurate diagnostics on less data, and make the data itself – especially the transition from healthy to diseased – incredibly valuable.

However, this sucks electrons out of batteries and plays at the edge of electrical design and the laws of physics, but Apple’s prowess in this area is close to making it possible.

What’s Not Working?
Apple has attempted to get medical researchers to create new health apps by developing ResearchKit, an open source framework for researchers. Great idea. However, given the huge potential for the Watch in diagnostics, ResearchKit and the recruitment of Principal Investigators feel dramatically under-resourced. (It took three years to go from ResearchKit 1.0 to 2.0.) Currently, there are just 11 ResearchKit apps on the iTunes Store. This effort – Apple software development and third-party app development – feels understaffed and underfunded. Given the potential size of the opportunity, the rhetoric doesn’t match the results, and the results to date feel off by at least 10x.

Apple needs to act more proactively and directly fund some of these projects with grants to specific principal investigators and build a program at scale (much like the NIH SBIR program). There should be a sustained commitment from Apple to at least several new FDA-cleared screening/diagnostic apps for the Watch and iPhone every year.

The Future
Although the current demographics of the Apple Watch skew young, the populations of the U.S., China, Europe and Japan continue to age, which in turn threatens to overwhelm healthcare systems. Having always-on, real-time streaming of medical data to clinicians will change the current “diagnosis on a single data point, by appointment” paradigm. Wearable healthcare diagnostics and screening apps open an entirely new segment for Apple and will change the shape of healthcare forever.

Imagine a future when you get an Apple Watch (or equivalent) through your insurer to monitor your health for early warning signs of heart attack, stroke, Parkinson’s disease and to help you monitor and manage diabetes, as well as reminding you about medications and tracking your exercise. And when combined with an advanced iPhone with additional FDA cleared screening apps for early detection of skin cancer, glaucoma, cataracts, and other diseases, the future of your health will truly be in your own hands.

Outside the U.S., China is plowing into this with government support, private and public funding, and a China FDA (CFDA) approval process that favors local Chinese solutions. There are well over 100 companies in China alone focusing on this area, many with substantial financial and technical support.

Let’s hope Apple piles on the missing resources for diagnostics and screening apps and grabs the opportunity.

Lessons Learned

  • Apple’s new Watch has two heart diagnostic apps cleared by the FDA
    • This is a big deal
  • In a few years, home and outpatient diagnostics will be performed by wearable consumer devices – Apple Watch, mobile phones or fitness trackers
    • Collecting and sending health data to doctors as needed
    • Collecting baseline data on tens of millions of healthy people to train disease prediction algorithms
  • In the U.S. the FDA has changed their mobile and digital device guidelines and policies to make this happen
  • Insurers will ultimately be paying for diagnostic wearables
  • Apple has a series of patents for additional Apple Watch sensors – glucose monitoring, blood pressure, UV detection, respiration
    • The watch is already capable of detecting blood oxygen level, sleep apnea, Parkinson’s disease
    • Getting a signal from a sensor is the easy part. Correlating that signal to an underlying condition is hard
    • They need to step up their game – money, software, people – with the medical research community
  • China has made building a local device and diagnostic industry one of their critical national initiatives

The End of More – The Death of Moore’s Law

 A version of this article first appeared in IEEE Spectrum.

For most of our lives the idea that computers and technology would get better, faster, cheaper every year was as assured as the sun rising every morning. The story “GlobalFoundries Stops All 7nm Development“ doesn’t sound like the end of that era, but for anyone who uses an electronic device, it most certainly is.

Technology innovation is going to take a different direction.


GlobalFoundries was one of the three companies that made the most advanced silicon chips for other companies (AMD, IBM, Broadcom, Qualcomm, STM and the Department of Defense.) The other foundries are Samsung in South Korea and TSMC in Taiwan. Now there are only two pursuing the leading edge.

This is a big deal.

Since the invention of the integrated circuit ~60 years ago, computer chip manufacturers have been able to pack more transistors onto a single piece of silicon every year. In 1965, Gordon Moore, one of the founders of Intel, observed that the number of transistors was doubling every 24 months and would continue to do so. For 40 years the chip industry managed to live up to that prediction. The first integrated circuits in 1960 had ~10 transistors. Today the most complex silicon chips have 10 billion. Think about it. Silicon chips can now hold a billion times more transistors.
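A quick sanity check of that arithmetic (assuming a simple doubling every two years):

```python
# Quick check of the Moore's Law arithmetic above: going from ~10 transistors
# (1960) to ~10 billion today is a factor of a billion, i.e. about 30 doublings.
from math import log2

factor = 10_000_000_000 / 10
print(f"Growth factor: {factor:,.0f}x")           # 1,000,000,000x
print(f"Doublings needed: {log2(factor):.1f}")    # ~29.9
print(f"Years at one doubling every 2 years: {2 * log2(factor):.0f}")  # ~60
```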

But Moore’s Law ended a decade ago. Consumers just didn’t get the memo.

No More Moore – The End of Process Technology Innovation
Chips are actually “printed,” not with a printing press but with lithography, using exotic chemicals and materials in a “fab” (a chip fabrication plant – the factory where chips are produced). Packing more transistors in each generation of chips requires the fab to “shrink” the size of the transistors. The first transistors were printed with lines 80 microns wide. Today Samsung and TSMC are pushing to produce chips with features a few dozen nanometers across. That’s about a 2,000-to-1 reduction.

Each new generation of chips that shrinks the line widths requires fabs to invest enormous amounts of money in new chip-making equipment.  While the first fabs cost a few million dollars, current fabs – the ones that push the bleeding edge – are over $10 billion.

And the exploding cost of the fab is not the only issue with packing more transistors on chips. Each shrink of chip line widths requires more complexity. Features have to be precisely placed on exact locations on each layer of a device. At 7 nanometers this requires up to 80 separate mask layers.

Moore’s Law was an observation about process technology and economics. For half a century it drove the aspirations of the semiconductor industry. But the other limit to packing more transistors onto a chip is physics. Under a rule known as Dennard scaling, as transistors got smaller their power density stayed constant, so power use stayed in proportion with area. That scaling broke down around 2005, when supply voltages could no longer keep shrinking with the transistors. The result is a “Power Wall” – a barrier to clock speed – that has limited microprocessor frequency to around 4 GHz. It’s why clock speeds on your microprocessor stopped increasing by leaps and bounds 13 years ago. And why memory density is not going to increase at the rate we saw a decade ago.
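A simplified way to see the argument: dynamic power per transistor is roughly proportional to capacitance × voltage² × frequency. The sketch below uses an idealized 0.7x shrink per generation – the numbers are illustrative, not real process data.

```python
# Simplified sketch of why voltage scaling mattered: dynamic power per transistor
# is roughly C * V^2 * f. With an idealized 0.7x shrink per generation,
# capacitance and area scale down with feature size and frequency scales up.
k = 0.7  # linear shrink factor per generation (idealized)

def power_density(generations, voltage_scales):
    c = v = f = area = 1.0
    for _ in range(generations):
        c *= k            # capacitance shrinks with feature size
        f /= k            # transistors switch faster
        area *= k * k     # transistor area shrinks quadratically
        if voltage_scales:
            v *= k        # classic Dennard scaling: voltage shrinks too
    return (c * v * v * f) / area

print(power_density(5, voltage_scales=True))   # ~1.0: power density stays flat
print(power_density(5, voltage_scales=False))  # grows ~(1/k^2)^5: the "power wall"
```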

This problem of continuing to shrink transistors is so hard that even Intel, the leader in microprocessors and for decades the gold standard in leading fab technology, has had problems. Industry observers have suggested that Intel has hit several speed bumps on the way to their next generation push to 10- and 7-nanometer designs and now is trailing TSMC and Samsung.

This combination of spiraling fab cost, technology barriers, power density limits and diminishing returns is the reason GlobalFoundries threw in the towel on further shrinking line widths. It also means the future direction of innovation on silicon is no longer predictable.

It’s the End of the Beginning
The end of putting more transistors on a single chip doesn’t mean the end of innovation in computers or mobile devices. (To be clear: 1) the bleeding edge will advance, but almost imperceptibly year-to-year; 2) GlobalFoundries isn’t shutting down, they’re just no longer going to be the ones pushing the edge; and 3) existing fabs can make current generation 14nm chips and their expensive tools have been paid for. Even older fabs at 28-, 45-, and 65nm can make a ton of money.)

But what it does mean is that we’re at the end of guaranteed year-to-year growth in computing power. The result is the end of the type of innovation we’ve been used to for the last 60 years. Instead of just faster versions of what we’ve been used to seeing, device designers now need to get more creative with the 10 billion transistors they have to work with.

It’s worth remembering that human brains have had 100 billion neurons for at least the last 35,000 years. Yet we’ve learned to do a lot more with the same compute power. The same will hold true with semiconductors – we’re going to figure out radically new ways to use those 10 billion transistors.

For example, there are new chip architectures coming (multi-core CPUs, massively parallel CPUs, special purpose silicon for AI/machine learning, and GPUs like Nvidia’s), new ways to package the chips and to interconnect memory, and even new types of memory. Other designs are pushing for extremely low power usage, and still others for very low cost.

It’s a Whole New Game
So, what does this mean for consumers? First, high performance applications that needed very fast computing locally on your device will continue their move to the cloud (where data centers are measured in football field sizes), further enabled by new 5G networks. Second, while computing devices we buy will not be much faster on today’s off-the-shelf software, new features – facial recognition, augmented reality, autonomous navigation, and apps we haven’t even thought about – are going to come from new software using new technology like new displays and sensors.

The world of computing is moving into new and uncharted territory. For desktop and mobile devices, the need for a “must have” upgrade won’t be for speed, but because there’s a new capability or app.

For chip manufacturers, for the first time in half a century, all rules are off. There will be a new set of winners and losers in this transition. It will be exciting to watch and see what emerges from the fog.

Lessons Learned

  • Moore’s Law – the doubling every two years of how many transistors can fit on a chip – has ended
  • Innovation will continue in new computer architectures, chip packaging, interconnects, and memory
  • 5G networks will move more high-performance consumer computing needs seamlessly to the cloud
  • New applications and hardware other than CPU speed (5G networks, displays, sensors) will now drive sales of consumer devices
  • New winners and losers will emerge in consumer devices and chip suppliers

The Difference Between Innovators and Entrepreneurs

I just received a thank-you note from a student who attended a fireside chat I held at the ranch. Something I said seemed to inspire her:

“I always thought you needed to be innovative, original to be an entrepreneur. Now I have a different perception. Entrepreneurs are the ones that make things happen. (That) takes focus, diligence, discipline, flexibility and perseverance. They can take an innovative idea and make it impactful. … successful entrepreneurs are also ones who take challenges in stride, adapt and adjust plans to accommodate whatever problems do come up.”


Over the last decade I’ve watched hundreds of my engineering students as well as ~1,500 of the country’s best scientists in the National Science Foundation Innovation Corps, cycle through the latest trends in startups: social media, new materials, big data, medical devices, diagnostics, digital health, therapeutics, drones, robotics, bitcoin, machine learning, etc.  Some of these world-class innovators get recruited by large companies like professional athletes, with paychecks to match. Others join startups to strike out on their own. But what I’ve noticed is that it’s rare that the smartest technical innovator is the most successful entrepreneur.

Being a domain expert in a technology field rarely makes you competent in commerce. Building a company takes very different skills than building a neural net in Python or decentralized blockchain apps in Ethereum.

Nothing makes me happier than to see my students getting great grades (and as they can tell you, I make them work very hard for them). But I remind them that customers don’t ask for your transcript. Until we start giving grades for resiliency, curiosity, agility, resourcefulness, pattern recognition, tenacity and having a passion for products and customers, great grades and successful entrepreneurship have at best a zero correlation (and anecdotal evidence suggests that the correlation may actually be negative.)

Most great technology startups – Oracle, Microsoft, Apple, Amazon, Tesla – were built by a team led by an entrepreneur.

It doesn’t mean that if you have technical skills you can’t build a successful company. It does mean that success in building a company that scales depends on finding product/market fit, enough customers, enough financing, enough great employees, distribution channels, etc. These are entrepreneurial skills you need to rapidly acquire or find a co-founder who already has them.

Lessons Learned

  • Entrepreneurship is a calling, not a job.
  • A calling is something you feel you need to follow; it gives you direction and purpose but no guarantee of a paycheck.
  • It’s what allows you to create a missionary zeal to recruit others, get customers to buy into a vision and gets VC’s to finance a set of slides.
  • It’s what makes you get up and do it again when customers say no, when investors laugh at your idea or when your rocket fails to make it to space.

Tesla Lost $700 Million Last Year, So Why Is Tesla’s Valuation $60 Billion?

Automobile manufacturers shipped 88 million cars in 2016. Tesla shipped 76,000. Yet Wall Street values Tesla higher than any other U.S. car manufacturer. What explains this disconnect – a company shipping more than 1,000 times fewer cars yet valued more highly?

The future.

Too many people compare Tesla to what already exists and that’s a mistake. Tesla is not another car company.

At the turn of the 20th century most people compared existing buggy and carriage manufacturers to the new automobile companies. They were both transportation, and they looked vaguely similar, with the only apparent difference that one was moved by horses attached to the front while the other had an unreliable and very noisy internal combustion engine.

They were different. And one is now only found in museums. Companies with business models built around internal combustion engines disrupted those built around horses.  That’s the likely outcome for every one of today’s automobile manufacturers. Tesla is a new form of transportation disrupting the incumbents.

Here are four reasons why.

Electric cars pollute less, have fewer moving parts, and are quieter and faster than existing cars. Today, the technology necessary for them to be a viable business (affordable batteries with sufficient range) has just come together. Most observers agree that autonomous electric cars will be the dominant form of transportation by mid-century. That’s bad news for existing car companies.

First, car companies have over a century of expertise in designing and building efficient mechanical propulsion systems – internal combustion engines for motive power and transmissions to drive the wheels. If existing car manufacturers want to build electric vehicles, all those design skills and most of the supply chain and manufacturing expertise are useless. Worse than useless, they become a legacy of capital equipment and headcount that is now a burden to the company. In a few years, the only thing useful in existing factories building traditional cars will be the walls and roof.

Second, while the automotive industry might be 1,000 times larger than Tesla, Tesla may actually have more expertise and dollars committed to the electric car ecosystem than any legacy car company. Tesla’s investment in its lithium-ion battery factory (the Gigafactory), its electric drivetrain design and its manufacturing output exceed those of the entire automotive industry combined.

Third, the future of transportation is not only electric, it’s autonomous and connected. A lot has been written about self-driving cars; as a reminder, automated driving comes in multiple levels (a small illustrative sketch follows the list):

  • Level 0: the car gives you warnings but the driver maintains full control of the car. For example, blind-spot warning.
  • Level 1: the driver and the car share control. For example, Adaptive Cruise Control (ACC) where the driver controls steering and the automated system controls speed.
  • Level 2: The automated system takes full control of the vehicle (accelerating, braking, and steering). The driver monitors and intervenes if the automated system fails to respond.
  • Level 3: The driver can text or watch a movie. The vehicle will handle situations that call for an immediate response, like emergency braking. The driver must be prepared to intervene within some limited time, when called upon by the vehicle.
  • Level 4: No driver attention is required for safety within a limited operating area or set of conditions; the driver may safely go to sleep or leave the driver’s seat.
  • Level 5: No human intervention is required anywhere. For example, a robotic taxi.
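
The list above mirrors the commonly cited levels of driving automation. As a minimal illustrative sketch (the class and function names are my own, not an industry API), the levels and the question of who must monitor the road can be encoded like this:

```python
# Illustrative sketch of the automation levels listed above; names are my own.
from enum import IntEnum

class DrivingAutomation(IntEnum):
    LEVEL_0 = 0  # warnings only; the driver is in full control
    LEVEL_1 = 1  # driver and car share control (e.g., adaptive cruise control)
    LEVEL_2 = 2  # the system accelerates, brakes and steers; the driver monitors
    LEVEL_3 = 3  # the system drives; the driver must take over when prompted
    LEVEL_4 = 4  # no driver attention needed within a limited operating domain
    LEVEL_5 = 5  # no human intervention required anywhere (e.g., a robotic taxi)

def driver_must_monitor(level: DrivingAutomation) -> bool:
    """Through Level 2 the human is still responsible for monitoring the road."""
    return level <= DrivingAutomation.LEVEL_2

print(driver_must_monitor(DrivingAutomation.LEVEL_2))  # True
print(driver_must_monitor(DrivingAutomation.LEVEL_3))  # False
```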

Each successive level of autonomy requires an exponentially greater amount of software engineering design and innovation. While cars have had an ever-increasing amount of software content, the next generation of vehicles will literally be computers on wheels. Much as with electric drivetrains, autonomy and connectivity are not core competencies of existing car companies.

Fourth, large, existing companies are executing a known business model and have built processes, procedures and key performance indicators to measure progress to a known set of goals. But when technology disruption happens (electric drive trains, autonomous vehicles, etc.) changing a business model is extremely difficult. Very few companies manage to make the transition from one business model to another.

And while Tesla might be the first mover in disrupting transportation there is no guarantee they will be the ultimate leader. However, the question shouldn’t be why Tesla has such a high valuation.

The question should be why the existing automobile companies aren’t valued like horse and buggy companies.

Lessons Learned

  • Few market leaders in an industry being disrupted make the transition to the new industry
  • The assets, expertise, and mindset that made them leaders in the past are usually the baggage that prevents them from seeing the future

Innovation, Change and the Rest of Your Life

I gave the Alumni Day talk at U.C. Santa Cruz and had a few things to say about innovation.

—-

Even though I live just up the coast, I’ve never had the opportunity to start a talk by saying “Go Banana Slugs.”

I’m honored for the opportunity to speak here today.

We’re standing 15 air miles away from the epicenter of technology innovation. The home of some of the most valuable and fastest growing companies in the world.

I’ve spent my life in innovation, eight startups in 21 years, and the last 15 years in academia teaching it.

I lived through the time when, working at my first job in Ann Arbor, Michigan, we had to get out a map to find out that San Jose wasn’t only in Puerto Rico – there was a city with that same name in California. And that’s where my plane ticket was going to take me to install some computer equipment.

39 years ago I got on that plane and never went back.

I’ve seen the Valley grow from Sunnyvale to Santa Clara to today where it stretches from San Jose to South of Market in San Francisco.  I’ve watched the Valley go from Microwave Valley – to Defense Valley – to Silicon Valley to Internet Valley. And to today, when its major product is simply innovation.  And I’ve been lucky enough to watch innovation happen not only in hardware and software but in Life Sciences – in Therapeutics, Medical Devices, Diagnostics and now Digital Health.

I’ve been asked to talk today about the future of Innovation – typically that involves giving you a list of hot technologies to pay attention to – technologies like machine learning. The applications that will pour out of just this one technology will transform every industry – from autonomous vehicles to automated radiology/oncology diagnostics.

Equally transformative on the life science side, CRISPR/Cas enables rapid editing of the genome, and that will change life sciences as radically as machine intelligence.

But today’s talk about the future of innovation is not about these technologies, or the applications or the new industries they will spawn.

In fact, it’s not about any specific new technologies.

The future of innovation is really about seven changes that have made innovation itself possible in a way that never existed before.

We’ve created a world where innovation is not just each hot new technology, but a perpetual motion machine.

So how did this happen?  Where is it going?

Silicon Valley emerged by the serendipitous intersection of:

  • Cold War research in microwaves and electronics at Stanford University,
  • a Stanford Dean of Engineering who encouraged startup culture over pure academic research,
  • Cold War military and intelligence funding driving microwave and military products for the defense industry in the 1950’s,
  • a single Bell Labs researcher deciding to start his semiconductor company next to Stanford in the 1950’s which led to
  • the wave of semiconductor startups in the 1960’s/70’s,
  • the emergence of Venture Capital as a professional industry,
  • the personal computer revolution in 1980’s,
  • the rise of the Internet in the 1990’s,
  • the wave of internet commerce applications in the first decade of the 21st century, and finally
  • the flood of risk capital into startups at a size and scale that was not only unimaginable at its start, but in the middle of the 20th century would have seemed laughable.

Up until the beginning of this century, the pattern for the Valley seemed clear. Each new wave of innovation – microwaves, defense, silicon, disk drives, PCs, Internet, therapeutics – was like punctuated equilibrium: just when you thought the wave had run its course into stasis, there emerged a sudden shift and radical change into a new family of technology.

But in the 20th Century there were barriers to Entrepreneurship
In the last century, while startups continued to innovate in each new wave of technology, the rate of innovation was constrained by limitations we can only now fully appreciate. Startups in the past were constrained by:

  1. customers – initially the government and large companies – who adopted technology slowly,
  2. long technology development cycles (how long it takes to get from idea to product),
  3. disposable founders,
  4. the high cost of getting to first customers (how many dollars it takes to build the product),
  5. the structure of the Venture Capital industry (a limited number of VC firms, each needing to invest millions per startup),
  6. the failure rate of new ventures (startups had no formal rules and acted like smaller versions of large companies), and
  7. the scarcity of information and expertise about how to build startups (what existed was clustered in specific regions like Silicon Valley, Boston and New York, and there were no books, blogs or YouTube videos about entrepreneurship).

What we’re now seeing is The Democratization of Entrepreneurship
What’s happening today is something more profound than a change in technology. What’s happening is that these seven limits to startups and innovation have been removed.

The first thing that’s changed is that Consumer Internet and Genomics are Driving Innovation at scale
In the 1950’s and ‘60’s U.S. Defense and Intelligence organizations drove the pace of innovation in Silicon Valley by providing research and development dollars to universities, and defense companies built weapons systems that used the Valley’s first microwave devices and semiconductor components.

In the 1970’s, 80’s and 90’s, momentum shifted to the enterprise as large businesses supported innovation in PCs, communications hardware and enterprise software. Government and the enterprise are now followers rather than leaders.

Today, for hardware and software it’s consumers – specifically consumer Internet companies – that are the drivers of innovation. When the product and channel are bits, adoption by 10’s and 100’s of millions and even billions of users can happen in years versus decades.

For life sciences it was the Genentech IPO in 1980 that proved to investors that life science startups could make them a ton of money.

The second thing that’s changed is that we’re now Compressing the Product Development Cycle
In the 20th century startups I was part of, the time to build a first product release was measured in years as we turned out the founder’s vision of what customers wanted. This meant building every possible feature the founding team envisioned into a monolithic “release” of the product.

Yet time after time, after the product shipped, startups would find that customers didn’t use or want most of the features. The founders were simply wrong about their assumptions about customer needs. It turns out the term “visionary founder” was usually a synonym for someone who was hallucinating. The effort that went into making all those unused features was wasted.

Today startups build products differently. Instead of building the maximum number of features, founders treat their vision as a series of untested hypotheses, then get out of the building and test a minimum feature set in the shortest period of time.  This lets them deliver a series of minimal viable products to customers in a fraction of the time.

For products that are simply “bits” delivered over the web, a first product can be shipped in weeks rather than years.

The third thing is that Founders Need to Run the Company Longer
Today, we take for granted new mobile apps and consumer devices appearing seemingly overnight, reaching tens of millions of users – and just as quickly falling out of favor. But in the 20th century, dominated by hardware, software, and life sciences, technology swings inside an existing market happened slowly — taking years, not months. And while new markets were created (i.e. the desktop PC market), they were relatively infrequent.

This meant that disposing of the founder, and the startup culture responsible for the initial innovation, didn’t hurt a company’s short-term or even mid-term prospects.  So, almost like clockwork 20th century startups fired the innovators/founders when they scaled. A company could go public on its initial wave of innovation, then coast on its current technology for years. In this business environment, hiring a new CEO who had experience growing a company around a single technical innovation was a rational decision for venture investors.

That’s no longer the case.

The pace of technology change in the second decade of the 21st century is relentless. It’s hard to think of a hardware/software or life science technology that dominates its space for years. That means new companies face continuous disruption before their investors can cash out.

To stay in business in the 21st century, startups must do three things their 20th century counterparts didn’t:

  • A company is no longer built on a single innovation. It needs to be continuously innovating – and who best to do that? The founders.
  • To continually innovate, companies need to operate at startup speed and cycle time much longer than their 20th century counterparts did. This requires retaining a startup culture for years – and who best to do that? The founders.
  • Continuous innovation requires the imagination and courage to challenge the initial hypotheses of your current business model (channel, cost, customers, products, supply chain, etc.) This might mean competing with and if necessary killing your own products. (Think of the relentless cycle of iPod then iPhone innovation.) Professional CEOs who excel at growing existing businesses find this extremely hard.  Who best to do that? The founders.

The fourth thing that’s changed is that you can start a company on your laptop For Thousands Rather than Millions of Dollars
Startups traditionally required millions of dollars of funding just to get their first product to customers. A company developing software would have to buy computers and license software from other companies and hire the staff to run and maintain it. A hardware startup had to spend money building prototypes and equipping a factory to manufacture the product.

Today open source software has slashed the cost of software development from millions of dollars to thousands. My students think of computing power as a utility like I think of electricity. They can get to more computing power via their laptop through Amazon Web Services than existed in the entire world when I started in Silicon Valley.

And for consumer hardware, no startup has to build their own factory as the costs are absorbed by offshore manufacturers.  China has simply become the factory.

The cost of getting the first product out the door for an Internet commerce startup has dropped by a factor of 100 or more in the last decade. Ironically, while the cost of getting the first product out the door has plummeted, it can now take 10’s or 100’s of millions of dollars to scale.

The fifth change is the New Structure of how startups get funded
The plummeting cost of getting a first product to market (particularly for Internet startups) has shaken up the Venture Capital industry.

Venture Capital used to be a tight club clustered around formal firms located in Silicon Valley, Boston, and New York. While those firms are still there (and getting larger), the pool of money that invests risk capital in startups has expanded, and a new class of investors has emerged.

First, Venture Capital and angel investing is no longer a U.S. or Euro-centric phenomenon. Risk capital has emerged in China, India and other countries where risk taking, innovation and liquidity are encouraged, on a scale previously only seen in the U.S.

Second, new groups of VCs, super angels, smaller than the traditional multi-hundred-million-dollar VC fund, can make small investments necessary to get a consumer Internet startup launched. These angels make lots of early bets and double-down when early results appear. (And the results do appear years earlier than in a traditional startup.)

Third, venture capital has now become Founder-friendly.

A 20th century VC was likely to have an MBA or finance background. A few, like John Doerr at Kleiner Perkins and Don Valentine at Sequoia, had operating experience in a large tech company. But out of the dot-com rubble at the turn of the 21st century, new VCs entered the game – this time with startup experience. The watershed moment was in 2009 when the co-founder of Netscape, Marc Andreessen, formed a venture firm and started to invest in founders with the goal of teaching them how to be CEOs for the long term. Andreessen realized that the game had changed. Continuous innovation was here to stay, and only founders – not hired execs – could play and win. Founder-friendly became a competitive advantage for his firm, Andreessen Horowitz. In a seller’s market, other VCs adopted this “invest in the founder” strategy.

Fourth, in the last decade, corporate investors and hedge funds have jumped into later stage investing with a passion. Their need to get into high-profile deals has driven late-stage valuations into unicorn territory.  A unicorn is a startup with a market capitalization north of a billion dollars.

What this means is that the emergence of incubators and super angels has dramatically expanded the sources of seed capital. VCs have ceded more control to founders. Corporate investors and hedge funds have dramatically expanded the amount of money available. And the globalization of entrepreneurship means the worldwide pool of potential startups has increased at least 100-fold since the turn of this century. Today there are over 200 startups worth over a billion dollars.

Change Number 6 is that Starting a Company means you no longer Act Like A Big Company
Since the turn of the century, there’s been a radical shift in how startups thought of themselves.  Until then investors and entrepreneurs acted like startups were simply smaller versions of large companies. Everything a large company did, a startup should do – write a business plan; hire sales, marketing, engineering; spec all the product features on day one and build everything for a big first customer ship.

We now understand that’s wrong.  Not kind of wrong but going out of business wrong.

What used to happen is you’d build the product, have a great launch event, everyone high-five the VP of Marketing for great press and then at the first board meeting ask the VP of Sales how he was doing versus the sales plan.  The response was inevitably “great pipeline.”  (Great pipeline means no real sales.)

This would continue for months, as customers weren’t behaving as the business plan said they would. Meanwhile every other department in the company would be making its plan – meaning the company was burning cash without bringing in revenue. Finally the board would fire the VP of Sales. The cycle would repeat: next you’d fire the VP of Marketing, then the CEO.

What we’ve learned is that while companies execute business models, startups search for a business model. It means that, unlike big companies, startups are guessing about who their customers are, what features they want, where and how they want to buy the product, and how much they want to pay. We now understand that startups are just temporary organizations designed to search for a scalable and repeatable business model.

We now have specific management tools to grow startups. Entrepreneurs first map their assumptions and then test these hypotheses with customers out in the field (customer development) and use an iterative and incremental development methodology (agile development) to build the product. When founders discover their assumptions are wrong, as they inevitably will, the result isn’t a crisis, it’s a learning event called a pivot — and an opportunity to change the business model.

The result: startups now have tools that speed up the search for customers, reduce time to market and slash the cost of development. I’m glad to have been part of the team inventing the Lean Startup methodology.

Change number 7 – the last one – is perhaps the most profound and one students graduating today don’t even recognize. And it’s that Information is everywhere

In the 20th century learning the best practices of a startup CEO was limited by your coffee bandwidth. That is, you learned best practices from your board and by having coffee with other, more experienced CEOs. Today, every founder can read all there is to know about running a startup online. Incubators and accelerators like Y-Combinator have institutionalized experiential training in best practices (product/market fit, pivots, agile development, etc.); provide experienced and hands-on mentorship; and offer a growing network of founding CEOs.

The result is that today’s CEOs have exponentially more information than their predecessors. This is ironically part of the problem. Reading about, hearing about and learning about how to build a successful company is not the same as having done it. As we’ll see, information does not mean experience, maturity or wisdom. 

The Entrepreneurial Singularity
The barriers to entrepreneurship are not just being removed. In each case, they’re being replaced by innovations that are speeding up each step, some by a factor of ten.

And while innovation is moving at Internet speed, it’s not limited to just Internet commerce startups. It has spread to the enterprise and ultimately every other business segment. We’re seeing the effect of Amazon on retailers. Malls are shutting down. Most students graduating today have no idea what a Blockbuster video store was. Many have never gotten their news from a physical newspaper.

If we are at the cusp of a revolution as important as the scientific and industrial revolutions what does it mean? Revolutions are not obvious when they happen. When James Watt started the industrial revolution with the steam engine in 1775 no one said, “This is the day everything changes.”  When Karl Benz drove around Mannheim in 1885, no one said, “There will be 500 million of these driving around in a century.” And certainly in 1958 when Noyce and Kilby invented the integrated circuit, the idea of a quintillion (10 to the 18th) transistors being produced each year would have seemed ludicrous.

It’s possible that we’ll look back to this decade as the beginning of our own revolution. We may remember this as the time when scientific discoveries and technological breakthroughs were integrated into the fabric of society faster than they had ever been before. When the speed of how businesses operated changed forever.

As the time when we reinvented the American economy and our Gross Domestic Product began to take off and the U.S. and the world reached a level of wealth never seen before.  It may be the dawn of a new era for a new American economy built on entrepreneurship and innovation.

Why the Navy Needs Disruption Now (part 2 of 2)

The future is here; it’s just distributed unevenly – the Silicon Valley view of tech adoption

The threat is here; it’s just distributed unevenly – A2/AD and the aircraft carrier

This is the second of a two-part post following my stay on the aircraft carrier USS Carl Vinson. Part 1 talked about what I saw and learned – the layout of a carrier, how the air crew operates and how the carrier functions in the context of the other ships around it (the strike group). But the biggest learning was the realization that disruption is not just happening to companies, it’s also happening to the Navy. And that the Lean Innovation tools we’ve built to deal with disruption and create continuous innovation for large commercial organizations were equally relevant here.

This post offers a few days’ worth of thinking about what I saw. (If you haven’t, read part 1 first.)


The threat is here; it’s just distributed unevenly – A2/AD and the aircraft carrier
Both of the following statements are true:

  • The aircraft carrier is viable for another 30 years.
  • The aircraft carrier is obsolete.

Well-defended targets
Think of an aircraft carrier as an $11 billion portable air force base, manned by 5,000 people, that delivers 44 F/A-18 strike fighters anywhere in the world.

The primary role of the 44 F/A-18 strike fighters that form the core of the carrier’s air wing is to control the air and drop bombs on enemy targets. For targets over uncontested airspace (Iraq, Afghanistan, Syria, Somalia, Yemen, Libya, etc.) that’s pretty easy. The problem is that First World countries have developed formidable surface-to-air missiles – the Russian S-300 and S-400 and the Chinese HQ-9 – which have become extremely effective at shooting down aircraft. And they have been selling these systems to other countries (Iran, Syria, Egypt, etc.). While the role of an aircraft carrier’s EA-18G Growlers is to jam/confuse the radar of these missiles, the sophistication and range of these surface-to-air missiles have been evolving faster than the jamming countermeasures on the EA-18G Growlers (and the cyber hacks to shut the radars down).


This means that the odds of a carrier-based F/A-18 strike fighter successfully reaching a target defended by these modern surface-to-air missiles are diminishing yearly. Unless the U.S. military can take these air defense systems out with drones, cruise missiles or cyber attack, brave and skilled pilots may not be enough. Given that the F/A-18s are manned aircraft (versus drones), high losses of pilots may be (politically) unacceptable.

Vulnerable carriers
If you want to kill a carrier, first you must find it and then you have to track it. In WWII, knowing where the enemy fleet was located was a big – and critical – question. Today, photo imaging satellites, satellites that track electronic emissions (radio, radar, etc.) and satellites with synthetic aperture radar that can see through clouds and at night can pinpoint the strike group and carrier 24/7. In the 20th century only the Soviet Union had this capability. Today, China can do this in the Pacific, and to a limited extent Iran has this capability in the Persian Gulf. Soon there will be enough commercial satellite coverage of the Earth, using the same sensors, that virtually anyone able to pay for the data will be able to track the ships.

During the Cold War the primary threat to carriers was from the air – from strike fighters dropping bombs/torpedoes or from cruise missiles (launched from ships and planes). While the Soviets had attack submarines, our Anti-Submarine Warfare (ASW) capabilities (along with very noisy Soviet subs, at least before the Walker spy ring) made subs a secondary threat to carriers.

In the 20th century the war plan for a carrier strike group used its fighter and attack aircraft and Tomahawk cruise missiles launched from the cruisers to destroy enemy radar, surface-to-air missiles, aircraft and communications (including satellite downlinks). As those threats were eliminated, the carrier strike group could move closer to land without fear of attack. This allowed the attack aircraft to loiter longer over targets or extend their reach over enemy territory.

Carriers were designed to be most effective launching a high number of sorties (number of flights) from ~225 miles from the target. For example, we can cruise offshore of potential adversaries (Iraq and Syria) who can’t get to our carriers. (Carriers can stand off farther, or can reach further inland, but they have to launch F/A-18s as refueling tankers to extend the mission range. For example, missions into Afghanistan are 6-8 hours versus normal mission times of 2-3 hours.)

In the 21st century carrier strike groups are confronting better-equipped adversaries, and today carriers face multiple threats before they can launch an initial strike. These threats include much quieter submarines; long-range, sea-skimming cruise missiles; and, in the Pacific, a potentially disruptive game changer – ballistic missiles armed with non-nuclear maneuverable warheads that can hit a carrier deck as it maneuvers at speed (the DF-21D and the longer-range DF-26).

In the Persian Gulf the carriers face another threat – Fast Inshore Attack Craft (FIAC) and speedboats with anti-ship cruise missiles that can be launched from shore.

Together, these threats – to the carrier-based aircraft and to the carriers themselves – are called anti-access/area denial (A2/AD) capabilities.

Eventually, in highly defended A2/AD environments like the western Pacific or the Persian Gulf, the cost of defending the carrier as a manned-aircraft platform becomes untenable and the probability of defending it successfully becomes too low. (This seems to be exactly the problem the manned bomber folks are facing in multiple regions.) But if not a carrier, what will the Navy use to project power? While the carrier might become obsolete, the mission certainly will not.

So how does/should the Navy solve these problems?

Three Horizons of Innovation
One useful way to think about innovation in the face of increasing disruption and competition is the “Three Horizons of Innovation.” It suggests that an organization should think about innovation across three categories called “Horizons.”

  • Horizon 1 activities support executing the existing mission with ever increasing efficiency
  • Horizon 2 is focused on extending the core mission
  • Horizon 3 is focused on searching for and creating brand new missions
    (see here for background on the Three Horizons; a minimal illustrative sketch follows.)
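
As a minimal illustrative sketch (the field values below paraphrase this post and the discussion that follows; they are not an official taxonomy), the three Horizons and how differently each is managed can be laid out side by side:

```python
# Illustrative sketch of the Three Horizons as described in this post.
from dataclasses import dataclass

@dataclass(frozen=True)
class Horizon:
    name: str
    focus: str
    risk: str
    managed_by: str

HORIZONS = [
    Horizon("Horizon 1", "execute the existing mission with ever increasing efficiency",
            "low", "repeatable processes, procedures and KPIs"),
    Horizon("Horizon 2", "extend the core mission using mostly existing capabilities",
            "moderate", "pattern recognition and experimentation"),
    Horizon("Horizon 3", "search for and create brand new missions",
            "high - most bets fail", "a portfolio of cheap, fast MVP experiments"),
]

for h in HORIZONS:
    print(f"{h.name}: {h.focus} | risk: {h.risk} | managed by: {h.managed_by}")
```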

Horizon 1 is the Navy’s core mission. Here the Navy executes against a set of known mission requirements (known beneficiaries, known ships and planes, known adversaries, deployment, supply chain, etc.). It uses existing capabilities and carries comparatively low risk in getting the next improvement out the door.

In a well-run organization like the Navy, innovation and improvement occur continuously in Horizon 1. Branches of the Navy innovate on new equipment, new tactics, new procurement processes, more sorties on newer carriers, etc. Since fighter pilots want more capable manned aircraft and carrier captains want better carriers, it’s no surprise that Horizon 1 innovations are upgrades: the next generation of carriers (the Ford class) and the next generation of Navy aircraft (the F-35C). Because a failure here can impact the Navy’s current mission, Horizon 1 uses traditional product management tools to minimize risk and assure execution. (And yes, like any complex projects, they still manage to run over budget and miss their delivery schedules.)

Because failure here is unacceptable, Navy Horizon 1 programs and people are managed by building repeatable and scalable processes, procedures, incentives and promotions to execute the mission.

In Horizon 2, the Navy extends its core mission. Here it looks for new opportunities within its existing mission (trying new technology on the same platform, using the same technology with new missions, etc.) Horizon 2 uses mostly existing capabilities (the carrier as an aircraft platform, aircraft to deliver munitions) and has moderate risk in building or securing new capabilities to get the product out the door.

An example of a potential Navy Horizon 2 innovation is unmanned drones flying off carriers to do the jobs fighter pilots hate: serving as airborne tankers (who wants to fly a gas tank around for 6 hours?) and flying ISR (Intelligence, Surveillance and Reconnaissance) – another tedious mission of circling for hours that could be better handled by a drone downlinking ISR data for processing on board a ship.

However, getting the tanker and ISR functions onto drones only delays the inevitable shift to drones for strike, and then for fighters. The problem of strike fighters’ increasing difficulty in penetrating heavily defended targets isn’t going to get better with the new F-35C (the replacement for the F/A-18). In fact, it will get worse. Regardless of the bravery and skill of the pilots, they will face air defense systems evolving at a faster rate than the defensive systems on the aircraft. It’s not at all clear in a low-intensity conflict (think Bosnia or Syria) that civilian leadership will want to risk captured or killed pilots and losing planes like the F-35C that cost several hundred million dollars each.

Management in Horizon 2 works by pattern recognition and experimentation inside the current mission model. Ironically, institutional inertia keeps the Navy from deploying unmanned assets on carriers. In a perfect world, drones in carrier tanker and ISR roles would have been deployed by the beginning of this decade. And by now, experience with them on a carrier deck could have led first to autonomous wingmen and eventually to autonomous missions. Instead the system appears to have fallen into the “real men fly planes and command Air Wings and get promoted by others who do” mindset.

The Navy does not lack drone demos and prototypes, but it has failed to deploy Horizon 2 innovations with speed and urgency. Failure to act aggressively here will impact the Navy’s ability to carry out its mission of sea control and power projection. (The Hudson Institute report on the future of the carrier is worth a read, and a RAND report on the same topic comes out in October.)

If you think Horizon 2 innovation is hard in the Navy, wait until you get to Horizon 3. This is where disruption happens. It’s how the aircraft carrier disrupted the battleship, how nuclear-powered ballistic missile submarines changed the nature of strategic deterrence, and how the DF-21/26 and artificial islands in the South China Sea changed decades of assumptions. And it’s why, in most organizations, innovation dies.

For the Navy, a Horizon 3 conversation would not be about better carriers and aircraft. Instead it would focus on the core reasons the Navy deploys a carrier strike group: to show the flag for deterrence, or to control part of the sea to protect shipping, or to protect a Marine amphibious force, or to project offensive power against any adversary in well-defended areas.

A Horizon 3 solution for the Navy would start with the basic needs of these missions (sea control, offensive power projection – sortie generation), the logistics requirements that come with them, and the barriers to their success like A2/AD threats. Lots of people have been talking and writing about this, and lots of Horizon 3 concepts have been proposed, such as Distributed Lethality, Arsenal Ships, underwater drone platforms, etc.

Focusing on these goals – not building or commanding carriers, or building and flying planes – is really, really hard. It’s hard to get existing operational organizations to think about disruption because it means they have to think about obsoleting a job, function or skill they’ve spent their lives perfecting. It’s hard because any large organization is led by people who succeeded as Horizon 1 and 2 managers and operators (not researchers). Their whole focus, career, incentives, etc. have been about building and making the current platforms work. And the Navy has excelled in doing so.

The problem is that Horizon 3 solutions take different people, different portfolio, different process and different politics.

People: In Horizon 1 and 2 programs people who fail don’t get promoted because in a known process failure to execute is a failure of individual performance. However, applying the same rules to Horizon 3 programs – no failures tolerated – means we’ll have no learning and no disruptive innovations. What spooks leadership is that in Horizon 3 most of the projects will fail. But using Lean Innovation they’ll fail quickly and cheaply.

In Horizon 3 the initial program is run by mavericks – the crazy innovators. In the Navy, these are the people you want to court-martial or pass over for promotion for not getting with the current program. (In a startup they’d be the founding CEO.) These are the fearless innovators you want creating new and potentially disruptive mission models. Failure to support this potentially disruptive talent means it will go elsewhere.

Portfolio: In Horizon 3, the Navy is essentially incubating a startup. And not just one. The Navy needs a portfolio of Horizon 3 bets, for the same reason venture capital and large companies have a portfolio of Horizon 3 bets – most of these bets will fail – but the ones that succeed are game changers.

Process: A critical difference between a Horizon 3 bet and a Horizon 1 or 2 bet is that you don’t build large, expensive, multi-year programs to test radically new concepts (think of the Zumwalt class destroyers). You use “Lean” techniques to build Minimal Viable Products (MVPs). MVPs are whatever it takes to get you the most learning in the shortest period of time.

Horizon 3 groups operate with speed and urgency – the goal is rapid learning. They need to be physically separate from operating divisions in an incubator, or their own facility. And they need their own plans, procedures, policies, incentives and Key Performance Indicators (KPIs) different from those in Horizon 1.  

The watchwords in Horizon 3 are “If everything seems under control, you’re just not going fast enough.”

Politics: In Silicon Valley most startups fail. That’s why we invest in a portfolio of new ideas, not just one. We embrace failure as an integral part of learning. We do so by realizing that in Horizon 3 we are testing hypotheses – a series of unknowns – not executing knowns. Yet failure/learning is a dirty word in the world of promotions and the “gotcha game” of politics. To survive in this environment Horizon 3 leaders must learn how to communicate up/down and sideways that they are not running Horizon 1 and 2 projects.

Meanwhile, Navy and DOD leadership has to invest in, and clearly communicate, an innovation strategy across all three Horizons.

Failure to manage innovation across all three Horizons and failure to make a portfolio of Horizon 3 bets means that the Navy is exposed to disruption by new entrants. Entrants unencumbered by decades of success, fueled by their own version of manifest destiny.

Lessons Learned

  • Our carriers are a work of art run and manned by professionals
    • Threats that can degrade or negate a carrier strike group exist in multiple areas
    • However, carriers are still a significant asset in almost all other combat scenarios
  • Speed and urgency rather than institutional inertia should be the watchwords for Horizon 2 innovation
  • Horizon 3 innovation is about clean-sheet-of-paper thinking
    • It’s what Silicon Valley calls disruption
    • It requires different people, portfolio, process and politics
  • The Navy (and DOD) must manage innovation across all three Horizons
    • Allocating dollars and resources for each
  • Remember that today’s crazy Horizon 3 idea is tomorrow’s Horizon 1 platform

Thanks to the crew of the USS Carl Vinson, Commander Todd Cimicata, and Stanford for a real education about the Navy.
