The Quantum Technology Ecosystem – Explained

If you think you understand quantum mechanics,
you don’t understand quantum mechanics

Richard Feynman

IBM Quantum Computer

Tens of billions of dollars of public and private capital are being invested in quantum technologies. Countries across the world have realized that quantum technologies can be a major disruptor of existing businesses and change the balance of military power. So much so that they have collectively invested ~$24 billion in quantum research and applications.

At the same time, a week doesn’t go by without another story about a quantum technology milestone or another quantum company getting funded. Quantum has moved out of the lab and is now the focus of commercial companies and investors. In 2021 venture capital funds invested over $2 billion in 90+ quantum technology companies, with over $1 billion of it going to quantum computing companies. In the last six months quantum computing companies IonQ, D-Wave and Rigetti went public at valuations close to a billion and a half dollars. Pretty amazing for computers that won’t be any better than existing systems for at least another decade – or more. So why the excitement about quantum?

The Quantum Market Opportunity

While most of the IPOs have been in Quantum Computing, Quantum technologies are used in three very different and distinct markets: Quantum Computing, Quantum Communications and Quantum Sensing and Metrology.

All three of these markets have the potential to be disruptive. In time quantum computing could obsolete existing cryptography systems, but viable commercial applications are still speculative. Quantum communications could allow secure networking but are not a viable near-term business. Quantum sensors could create new types of medical devices, as well as new classes of military applications, but are still far from a scalable business.

It’s a pretty safe bet that 1) the largest commercial applications of quantum technologies won’t be the ones these companies currently think they’re going to be, 2) defense applications using quantum technologies will come first, and 3) if and when commercial applications do show up, they’ll destroy existing businesses and create new ones.

We’ll describe each of these market segments in detail. But first a description of some quantum concepts.

Key Quantum Concepts

Skip this section if all you want to know is that 1) quantum works, 2) yes, it is magic.

Quantum – The word “Quantum” refers to quantum mechanics, which explains the behavior and properties of atomic or subatomic particles, such as electrons, neutrinos, and photons.

Superposition – quantum particles exist in many possible states at the same time. So a particle is described as a “superposition” of all those possible states. They fluctuate until observed and measured. Superposition underpins a number of potential quantum computing applications.
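Superposition can be sketched numerically: a qubit’s state is a pair of complex amplitudes, and measurement probabilities are their squared magnitudes. A minimal illustration (the equal-weight amplitudes below are just an example, not tied to any hardware):

```python
import numpy as np

# A qubit state is a 2-element complex vector of amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. This is an equal superposition of 0 and 1.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 chance of observing 0 or 1

# Measuring "collapses" the superposition: we sample one definite outcome.
outcome = np.random.choice([0, 1], p=probs)
print(outcome)  # 0 or 1
```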

Entanglement – is what Einstein called “spooky action at a distance.” Two or more quantum objects can be linked so that measurement of one dictates the outcomes for the other, regardless of how far apart they are. Entanglement underpins a number of potential quantum communications applications.

Observation – Superposition and entanglement only exist as long as quantum particles are not observed or measured. If you observe the quantum state you can get information, but it results in the collapse of the quantum system.

Qubit – is short for a quantum bit. It is the basic computing element of a quantum computer and leverages the principle of superposition to encode information. Qubits can be physically implemented in several ways: as spins, trapped atoms and ions, photons, or superconducting circuits.

Quantum Computers – Background

Quantum computers are a really cool idea. They harness the unique behavior of quantum physics—such as superposition, entanglement, and quantum interference—and apply it to computing.

In a classical computer, transistors can represent two states – either a 0 or a 1. Instead of transistors, quantum computers use quantum bits (qubits). Qubits exist in superposition – in both the 0 and 1 states simultaneously.

Classical computers use transistors as the physical building blocks of logic. Quantum computers may use trapped ions, superconducting loops, quantum dots, or vacancies in a diamond. The jury is still out.

In a classical computer, 2-14 transistors make up each of the seven basic logic gates (AND, OR, NAND, etc.). In a quantum computer, building a single logical qubit requires a minimum of 9 – but more likely hundreds or thousands – of physical qubits (to provide error correction, stability, and fault tolerance in the face of decoherence).
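The idea of trading many noisy physical elements for one reliable logical element can be illustrated with the simplest classical analog of error correction, a repetition code. This is a gross simplification of real quantum codes (which must also handle phase errors and cannot copy states), but it shows why redundancy buys reliability. The 5% error rate is an assumed number for illustration:

```python
import random

def encode(bit):
    """Encode one logical bit as three physical copies (repetition code)."""
    return [bit, bit, bit]

def apply_noise(bits, flip_prob):
    """Each physical bit flips independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if at most one copy flipped."""
    return int(sum(bits) >= 2)

random.seed(0)
flip_prob = 0.05  # assumed physical error rate
trials = 100_000
logical_errors = sum(
    decode(apply_noise(encode(0), flip_prob)) != 0 for _ in range(trials)
)
# Logical error rate ~ 3*p^2 (about 0.7%), far below the 5% physical rate.
print(logical_errors / trials)
```

Real quantum error correction (e.g. surface codes) needs far more overhead per logical qubit, which is where the hundreds-to-thousands figure comes from.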

In a classical computer compute-power increases linearly with the number of transistors and clock speed. In a Quantum computer compute-power increases exponentially with the addition of each logical qubit.
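A quick way to see this exponential scaling: the number of complex amplitudes needed just to describe an n-qubit state doubles with every qubit. The sketch below assumes 16 bytes per amplitude (two 64-bit floats):

```python
# A classical register of n bits holds one of 2**n values at a time.
# An n-qubit register's state is described by 2**n complex amplitudes
# simultaneously, so the description doubles with every qubit added.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    # 16 bytes per complex amplitude (two 64-bit floats)
    memory_gb = amplitudes * 16 / 1e9
    print(f"{n} qubits -> {amplitudes:,} amplitudes (~{memory_gb:,.1f} GB to simulate)")
```

By ~50 qubits the state description alone outgrows any classical machine’s memory, which is one way to see why each added logical qubit matters so much.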

But qubits have high error rates and need to be ultracold. In contrast classical computers have very low error rates and operate at room temperature.

Finally, classical computers are great for general-purpose computing. But quantum computers can theoretically solve some complex problems exponentially faster than a classical computer. And with a sufficient number of logical qubits they can become a Cryptographically Relevant Quantum Computer (CRQC). This is where quantum computers become very interesting and relevant for both commercial and national security applications. (More below.)

Types of Quantum Computers

Quantum computers could potentially do things at speeds current computers cannot. Think of the difference between how fast you can count on your fingers and how fast today’s computers can count. That’s the same order-of-magnitude speed-up a quantum computer could have over today’s computers for certain applications.

Quantum computers fall into four categories:

  1. Quantum Emulator/Simulator
  2. Quantum Annealer
  3. NISQ – Noisy Intermediate Scale Quantum
  4. Universal Quantum Computer – which can be a Cryptographically Relevant Quantum Computer (CRQC)

When you remove all the marketing hype, the only type that matters is #4 – a Universal Quantum Computer. And we’re at least a decade or more away from having those.

Quantum Emulator/Simulator
These are classical computers that you can buy today that simulate quantum algorithms. They make it easy to test and debug a quantum algorithm that someday may be able to run on a Universal Quantum Computer. Since they don’t use any quantum hardware they are no faster than standard computers.
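Because a quantum state is just a vector of amplitudes and gates are matrices, a small circuit can be emulated in a few lines on any classical machine. A minimal sketch, using the standard textbook gate matrices, that builds an entangled Bell state from a Hadamard and a CNOT:

```python
import numpy as np

# Minimal statevector emulation of a 2-qubit circuit on a classical machine:
# a Hadamard on qubit 0 followed by a CNOT, producing an entangled Bell state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
state = np.kron(H, I) @ state                  # Hadamard on the first qubit
state = CNOT @ state                           # entangle the two qubits

# Only |00> and |11> have nonzero probability (0.5 each).
print(np.round(np.abs(state) ** 2, 3))
```

This brute-force approach is exactly why emulators top out at a few dozen qubits: the state vector doubles in size with each one.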

Quantum Annealer is a special purpose quantum computer designed to only run combinatorial optimization problems, not general-purpose computing, or cryptography problems. D-Wave has defined and owned this space. While they have more physical Qubits than any other current system they are not organized as gate-based logical qubits. Currently this is a nascent commercial technology in search of a future viable market.

Noisy Intermediate-Scale Quantum (NISQ) computers. Think of these as prototypes of a Universal Quantum Computer – with several orders of magnitude fewer qubits. (They currently have 50-100 qubits, limited gate depths, and short coherence times.) Because they are still short several orders of magnitude of qubits, NISQ computers cannot perform any useful computation. However, they are a necessary phase in the learning curve, driving total-system and software development in parallel with the hardware. Think of them as the training wheels for future universal quantum computers.

Universal Quantum Computers / Cryptographically Relevant Quantum Computers (CRQC)
This is the ultimate goal. If you could build a universal quantum computer with fault tolerance (i.e. millions of error-corrected physical qubits resulting in thousands of logical qubits), you could run quantum algorithms in cryptography, search and optimization, quantum systems simulations, and linear equation solvers. (See here for a list of hundreds of quantum algorithms.) These all would dramatically outperform classical computation on large complex problems that grow exponentially as more variables are considered. Classical computers can’t attack these problems in reasonable times without so many approximations that the result is useless. We simply run out of time and transistors with classical computing on these problems. These special algorithms are what make quantum computers potentially valuable. For example, Grover’s algorithm solves the problem of searching unstructured data. Further, quantum computers are very good at minimization/optimization – think optimizing complex supply chains, energy states to form complex molecules, financial models, etc.
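Grover’s iteration (flip the marked item’s sign, then reflect all amplitudes about their mean) is simple enough to emulate at toy scale. A sketch for a search over N = 8 items, with an arbitrarily chosen marked index; after ~√N iterations the marked item dominates the measurement probabilities:

```python
import numpy as np

# Toy statevector simulation of Grover's unstructured search over N = 8 items.
# The "oracle" flips the sign of the marked item's amplitude; the diffusion
# step reflects all amplitudes about their mean. ~sqrt(N) iterations suffice.
N, marked = 8, 5
state = np.full(N, 1 / np.sqrt(N))                  # uniform superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # optimal iteration count (2)
for _ in range(iterations):
    state[marked] *= -1                  # oracle: mark the target
    state = 2 * state.mean() - state     # diffusion: invert about the mean

print(np.argmax(state ** 2), round(state[marked] ** 2, 3))  # 5 0.945
```

A classical search over N unsorted items needs ~N/2 lookups on average; Grover needs only ~√N oracle calls, a quadratic (not exponential) speed-up.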

However, while all of these algorithms might have commercial potential one day, no one has yet to come up with a use for them that would radically transform any business or military application. Except for one – and that one keeps people awake at night.

It’s Shor’s algorithm for integer factorization – an algorithm that can break the math problem underlying much of today’s public-key cryptography.

The security of today’s public-key cryptography systems rests on the assumption that certain math problems are practically impossible to solve: factoring numbers a thousand or more digits long into large primes (e.g., RSA), or computing discrete logarithms over elliptic curves (e.g., ECDSA, ECDH) or finite fields (DSA). No classical computer, regardless of size, can do this in any useful amount of time. Shor’s factorization algorithm can crack these codes if run on a Universal Quantum Computer. Uh-oh!
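A toy example of why fast factoring is fatal to RSA. The primes here are the classic tiny textbook pair; real keys use ~2048-bit moduli precisely so that no classical machine can factor them:

```python
# Toy RSA with tiny primes, to show why fast factoring breaks the scheme.
# Real keys use 2048-bit moduli; the security assumption is only that
# recovering p and q from n is infeasible for classical computers.
p, q = 61, 53
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # private: requires knowing p and q
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (needs Python 3.8+)

msg = 42
cipher = pow(msg, e, n)       # anyone can encrypt with the public (e, n)

# An attacker who factors n = 61 * 53 (what Shor's algorithm would do
# quickly on a CRQC) can rebuild phi, derive d, and decrypt everything.
d_attacker = pow(e, -1, (61 - 1) * (53 - 1))
print(pow(cipher, d_attacker, n))  # 42 -- plaintext recovered
```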

Impact of a Cryptographically Relevant Quantum Computer (CRQC)

Skip this section if you don’t care about cryptography.

Not only would a Universal Quantum Computer running Shor’s algorithm make today’s public-key algorithms (used for asymmetric key exchanges and digital signatures) useless, someone can mount a “harvest-now-decrypt-later” attack – recording encrypted documents now with the intent to decrypt them in the future. That means everything you send encrypted today could be read retrospectively. Many applications – from ATMs to emails – would be vulnerable unless we replace those algorithms with ones that are “quantum-safe.”

When Will Current Cryptographic Systems Be Vulnerable?

The good news is that we’re nowhere near having a viable Cryptographically Relevant Quantum Computer, now or in the next few years. However, you can estimate when this will happen by calculating how many logical qubits are needed to run Shor’s algorithm and how long it would take to break these crypto systems. There are lots of people tracking these numbers (see here and here). Their estimate is that using 8,194 logical qubits (about 22.27 million physical qubits), it would take a quantum computer 20 minutes to break RSA-2048. The best estimate is that this might be possible in 8 to 20 years.

Post-Quantum / Quantum-Resistant Codes

That means if you want to protect the content you’re sending now, you need to migrate to new Post-Quantum/Quantum-Resistant codes. But there are three things to consider in doing so:

  1. shelf-life time: the number of years the information must be protected by cyber-systems
  2. migration time: the number of years needed to properly and safely migrate the system to a quantum-safe solution
  3. threat timeline: the number of years before threat actors will be able to break the quantum-vulnerable systems
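These three numbers combine into what is often called Mosca’s inequality: if shelf-life plus migration time exceeds the threat timeline, data encrypted today is already at risk. A sketch with illustrative (assumed) numbers:

```python
def quantum_risk(shelf_life_years, migration_years, threat_years):
    """Mosca's rule of thumb: if the data must stay secret longer than it
    takes a CRQC to arrive (minus the time needed to migrate), records
    harvested today will be decryptable while they still matter."""
    return shelf_life_years + migration_years > threat_years

# Illustrative numbers only: data that must stay secret 10 years, a 5-year
# migration effort, and a CRQC assumed possible in 12 years.
print(quantum_risk(10, 5, 12))  # True -- migrate now
```

The uncomfortable implication: even with a CRQC a decade away, organizations with long-lived secrets and slow migration cycles are already late.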

These new cryptographic systems would secure against both quantum and conventional computers and can interoperate with existing communication protocols and networks. The symmetric key algorithms of the Commercial National Security Algorithm (CNSA) Suite were selected to be secure for national security systems usage even if a CRQC is developed.

Cryptographic schemes that commercial industry believes are quantum-safe include lattice-based cryptography, hash trees, multivariate equations, and supersingular isogeny elliptic curves.

Estimates of when you can actually buy a fully error-corrected quantum computer vary from “never” to somewhere between 8 and 20 years from now. (Some optimists believe even earlier.)

Quantum Communication

Quantum communications are not quantum computers. A quantum network’s value comes from its ability to distribute entanglement. These communication devices manipulate the quantum properties of photons (particles of light) to build quantum networks.

This market includes secure quantum key distribution, clock synchronization, random number generation and networking of quantum military sensors, computers, and other systems.

Quantum Cryptography/Quantum Key Distribution
Quantum Cryptography/Quantum Key Distribution can distribute keys between authorized partners connected by a quantum channel and a classical authenticated channel. It can be implemented via fiber optics or free space transmission. China transmitted entangled photons (at one pair of entangled particles per second) over 1,200 km in a satellite link, using the Micius satellite.

The Good: it can detect the presence of an eavesdropper, a feature not provided in standard cryptography. The Bad: Quantum Key Distribution can’t be implemented in software or as a service on a network and cannot be easily integrated into existing network equipment. It lacks flexibility for upgrades or security patches. Securing and validating Quantum Key Distribution is hard and it’s only one part of a cryptographic system.

The view from the National Security Agency (NSA) is that quantum-resistant (or post-quantum) cryptography is a more cost effective and easily maintained solution than quantum key distribution. NSA does not support the usage of QKD or QC to protect communications in National Security Systems. (See here.) They do not anticipate certifying or approving any Quantum Cryptography/Quantum Key Distribution security products for usage by National Security System customers unless these limitations are overcome. However, if you’re a commercial company these systems may be worth exploring.

Quantum Random Number Generators (QRNGs)
Commercial Quantum Random Number Generators that use quantum effects (entanglement) to generate nondeterministic randomness are available today. (Government agencies can already make quality random numbers and don’t need these devices.)
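The contrast with a software pseudo-random generator is determinism: given the seed, every “random” bit can be replayed. A quick illustration, using the OS entropy pool merely as a stand-in for a hardware/quantum source:

```python
import random, os

# A seeded software PRNG is fully deterministic: same seed, same "random" bits.
rng1 = random.Random(1234)
rng2 = random.Random(1234)
print([rng1.randint(0, 9) for _ in range(5)])
print([rng2.randint(0, 9) for _ in range(5)])  # identical sequence

# A hardware entropy source (os.urandom here stands in for a QRNG's
# measurement noise) has no seed to replay.
print(os.urandom(4).hex())
```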

Random number generators will remain secure even when a Cryptographically Relevant Quantum Computer is built.

Quantum Sensing and Metrology

Quantum sensors are not quantum computers. They use quantum effects to measure the physical world with extreme precision.

This segment consists of Quantum Sensing (quantum magnetometers, gravimeters, …), Quantum Timing (precise time measurement and distribution), and Quantum Imaging (quantum radar, low-SNR imaging, …). Each of these areas can create entirely new commercial products or even entire new industries – e.g. new classes of medical devices – as well as new military systems: anti-submarine warfare, detecting stealth aircraft, finding hidden tunnels and weapons of mass destruction. Some of these are achievable in the near term.

Quantum Timing
First-generation quantum timing devices already exist as microwave atomic clocks. They are used in GPS satellites to provide the precise timing needed for accurate positioning. The Internet and computer networks use network time servers and the NTP protocol to receive the atomic clock time from either the GPS system or a radio transmission.

The next generation of quantum clocks is even more accurate, using laser-cooled single ions confined in an electromagnetic ion trap. This increased accuracy is not only important for scientists attempting to measure dark matter and gravitational waves; miniaturized, more accurate atomic clocks will allow precision navigation in GPS-degraded/denied areas, e.g. in commercial and military aircraft, in tunnels and caves, etc.
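A back-of-envelope way to see why clock precision translates directly into navigation precision: position is derived from signal time-of-flight, so clock error times the speed of light is ranging error:

```python
# Why clock accuracy matters for navigation: position is computed from
# signal time-of-flight, so clock error translates directly into distance
# error at the speed of light.
C = 299_792_458  # speed of light, m/s

for clock_error_s in (1e-6, 1e-9, 1e-12):
    print(f"{clock_error_s:.0e} s of clock error -> "
          f"{C * clock_error_s:,.3f} m of ranging error")
```

A microsecond of drift means hundreds of meters of error; a nanosecond, about 30 cm – which is why better clocks mean better GPS-free navigation.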

Quantum Imaging
Quantum imaging is one of the most interesting and near-term applications. First-generation magnetometers such as superconducting quantum interference devices (SQUIDs) already exist. Newer quantum sensing and imaging devices use entangled light, accelerometers, magnetometers, electrometers, and gravity sensors. These allow measurements of frequency, acceleration, rotation rates, electric and magnetic fields, photons, or temperature with extreme sensitivity and accuracy.

These new sensors use a variety of quantum effects: electronic, magnetic, or vibrational states or spin qubits, neutral atoms, or trapped ions. Or they use quantum coherence to measure a physical quantity. Or use quantum entanglement to improve the sensitivity or precision of a measurement, beyond what is possible classically.

Quantum imaging applications can have immediate uses in archeology, and profound military applications. For example, submarine detection using quantum magnetometers or satellite gravimeters could make the ocean transparent. That would compromise the survivability of the sea-based nuclear deterrent by detecting and tracking subs deep underwater.

Quantum sensors and quantum radar from companies like Rydberg can be game changers.

Gravimeters or quantum magnetometers could also detect concealed tunnels, bunkers, and nuclear materials. Magnetic resonance imaging could remotely identify chemical and biological agents. Quantum radar or LIDAR would enable extremely sensitive detection of electromagnetic emissions, enhancing ELINT and electronic warfare capabilities. It can use fewer emissions to get the same detection result – better detection accuracy at the same power levels – even detecting stealth aircraft.

Finally, ghost imaging uses the quantum properties of light to detect distant objects using very weak illumination beams that are difficult for the imaged target to detect. It can increase the accuracy of, and lessen the radiation a patient is exposed to during, X-rays. It can see through smoke and clouds. Quantum illumination is similar to ghost imaging but could provide even greater sensitivity.

National and Commercial Efforts
Countries across the world are making major investments – ~$24 billion in 2021 – in quantum research and applications.

Lessons Learned

  • Quantum technologies are emerging and disruptive to companies and defense
  • Quantum technologies cover Quantum Computing, Quantum Communications and Quantum Sensing and Metrology
    • Quantum computing could obsolete existing cryptography systems
    • Quantum communication could allow secure cryptography key distribution and networking of quantum sensors and computers
    • Quantum sensors could make the ocean transparent for Anti-submarine warfare, create unjammable A2/AD, detect stealth aircraft, find hidden tunnels and weapons of mass destruction, etc.
  • A few of these technologies are available now, some in the next 5 years and a few are a decade or more out
  • Tens of billions of public and private capital dollars are being invested in them
  • Defense applications will come first
  • The largest commercial applications won’t be the ones we currently think they’re going to be
    • when they do show up they’ll destroy existing businesses and create new ones

The Semiconductor Ecosystem – Explained

The last year has seen a ton written about the semiconductor industry: chip shortages, the CHIPS Act, our dependence on Taiwan and TSMC, China, etc.

But despite all this talk about chips and semiconductors, few understand how the industry is structured. I’ve found the best way to understand something complicated is to diagram it out, step by step. So here’s a quick pictorial tutorial on how the industry works.


The Semiconductor Ecosystem

We’re seeing the digital transformation of everything. Semiconductors – chips that process digital information – are in almost everything: computers, cars, home appliances, medical equipment, etc. Semiconductor companies will sell $600 billion worth of chips this year.

Looking at the figure below, the industry seems pretty simple. Companies in the semiconductor ecosystem make chips (the triangle on the left) and sell them to companies and government agencies (on the right). Those companies and government agencies then design the chips into systems and devices (e.g. iPhones, PCs, airplanes, cloud computing, etc.), and sell them to consumers, businesses, and governments. The revenue of products that contain chips is worth tens of trillions of dollars.

Yet, given how large it is, the industry remains a mystery to most. If you think of the semiconductor industry at all, you may picture workers in bunny suits in a fab clean room (the chip factory) holding a 12” wafer. Yet it is a business that manipulates materials an atom at a time and whose factories cost tens of billions of dollars to build. (By the way, that wafer has two trillion transistors on it.)
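A rough sanity check on that two-trillion figure. The transistor density used below is an assumed round number in the range of recent leading-edge logic nodes, not a quoted spec:

```python
import math

# Back-of-envelope check on "two trillion transistors per wafer".
# Assumed density (~3e7 transistors/mm^2) is a rough figure for a recent
# logic node, not a quoted spec.
wafer_diameter_mm = 300          # a 12-inch wafer
density_per_mm2 = 3e7            # assumed transistor density

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
transistors = wafer_area_mm2 * density_per_mm2
print(f"{transistors:.2e}")  # ~2.1e+12 -- about two trillion
```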

If you were able to look inside the simple triangle representing the semiconductor industry, instead of a single company making chips, you would find an industry with hundreds of companies, all dependent on each other. Taken as a whole it’s pretty overwhelming, so let’s describe one part of the ecosystem at a time.  (Warning –  this is a simplified view of a very complex industry.)

Semiconductor Industry Segments

The semiconductor industry has eight different types of companies. Each of these distinct industry segments feeds its resources up the value chain to the next until finally a chip factory (a “Fab”) has all the designs, equipment, and materials necessary to manufacture a chip. Taken from the bottom up, these semiconductor industry segments are:

  1. Chip Intellectual Property (IP) Cores
  2. Electronic Design Automation (EDA) Tools
  3. Specialized Materials
  4. Wafer Fab Equipment (WFE)
  5. “Fabless” Chip Companies
  6. Integrated Device Manufacturers (IDMs)
  7. Chip Foundries
  8. Outsourced Semiconductor Assembly and Test (OSAT)

The following sections provide more detail about each of these eight semiconductor industry segments.

Chip Intellectual Property (IP) Cores

  • The design of a chip may be owned by a single company, or…
  • Some companies license their chip designs – as software building blocks, called IP Cores – for wide use
  • There are over 150 companies that sell chip IP Cores
  • For example, Apple licenses IP Cores from ARM as a building block of their microprocessors in their iPhones and Computers

Electronic Design Automation (EDA) Tools

  • Engineers design chips (adding their own designs on top of any IP cores they’ve bought) using specialized Electronic Design Automation (EDA) software
  • The industry is dominated by three U.S. vendors – Cadence, Mentor (now part of Siemens) and Synopsys
  • It takes a large engineering team using these EDA tools 2-3 years to design a complex logic chip like a microprocessor used inside a phone, computer or server. (See the figure of the design process below.)

  • Today, as logic chips continue to become more complex, all Electronic Design Automation companies are beginning to insert Artificial Intelligence aids to automate and speed up the process

Specialized Materials and Chemicals

So far our chip is still in software. But to turn it into something tangible we’re going to have to physically produce it in a chip factory called a “fab.” The factories that make chips need to buy specialized materials and chemicals:

  • Silicon wafers – and to make those they need crystal growing furnaces
  • Over 100 Gases are used – bulk gases (oxygen, nitrogen, carbon dioxide, hydrogen, argon, helium), and other exotic/toxic gases (fluorine, nitrogen trifluoride, arsine, phosphine, boron trifluoride, diborane, silane, and the list goes on…)
  • Fluids (photoresists, top coats, CMP slurries)
  • Photomasks
  • Wafer handling equipment, dicing
  • RF Generators


Wafer Fab Equipment (WFE) Make the Chips

  • These machines physically manufacture the chips
  • Five companies dominate the industry – Applied Materials, KLA, LAM, Tokyo Electron and ASML
  • These are some of the most complicated (and expensive) machines on Earth. They take a slice of an ingot of silicon and manipulate its atoms on and below its surface
  • We’ll explain how these machines are used a bit later on

 “Fabless” Chip Companies

  • Systems companies (Apple, Qualcomm, Nvidia, Amazon, Facebook, etc.) that previously used off-the-shelf chips now design their own chips.
  • They create chip designs (using IP Cores and their own designs) and send the designs to “foundries” that have “fabs” that manufacture them
  • They may use the chips exclusively in their own devices e.g. Apple, Google, Amazon ….
  • Or they may sell the chips to everyone e.g. AMD, Nvidia, Qualcomm, Broadcom…
  • They do not own Wafer Fab Equipment or use specialized materials or chemicals
  • They do use Chip IP and Electronic Design Software to design the chips


Integrated Device Manufacturers (IDMs)

  • Integrated Device Manufacturers (IDMs) design, manufacture (in their own fabs), and sell their own chips
    • They do not make chips for other companies (this is changing rapidly – see here.)
    • There are three categories of IDMs– Memory (e.g. Micron, SK Hynix), Logic (e.g. Intel), Analog (TI, Analog Devices)
  • They have their own “fabs” but may also use foundries
    • They use Chip IP and Electronic Design Software to design their chips
    • They buy Wafer Fab Equipment and use specialized materials and chemicals
  • The average cost of taping out a new leading-edge chip (3nm) is now $500 million

 Chip Foundries

  • Foundries make chips for others in their “fabs”
  • They buy and integrate equipment from a variety of manufacturers
    • Wafer Fab Equipment and specialized materials and chemicals
  • They design unique processes using this equipment to make the chips
  • But they don’t design chips
  • TSMC in Taiwan is the leader in logic, Samsung is second
  • Other fabs specialize in making chips for analog, power, rf, displays, secure military, etc.
  • It costs $20 billion to build a new generation chip (3nm) fabrication plant

Fabs

  • Fabs are short for fabrication plants – the factory that makes chips
  • Integrated Device Manufacturers (IDMs) and Foundries both have fabs. The only difference is that IDMs make chips to sell themselves, while foundries make chips for others to sell.
  • Think of a Fab as analogous to a book printing plant (see figure below)
  1. Just as an author writes a book using a word processor, an engineer designs a chip using electronic design automation tools
  2. An author contracts with a publisher who specializes in their genre and then sends the text to a printing plant. An engineer selects a fab appropriate for their type of chip (memory, logic, RF, analog)
  3. The printing plant buys paper and ink. A fab buys raw materials; silicon, chemicals, gases
  4. The printing plant buys printing machinery, presses, binders, trimmers. The fab buys wafer fab equipment, etchers, deposition, lithography, testers, packaging
  5. The printing process for a book uses offset lithography, filming, stripping, blueprints, plate making, binding and trimming. Chips are manufactured in a complicated process manipulating atoms using etchers, deposition, lithography. Think of it as an atomic level offset printing. The wafers are then cut up and the chips are packaged
  6. The plant turns out millions of copies of the same book. The fab turns out millions of copies of the same chip

While this sounds simple, it’s not. Chips are probably the most complicated products ever manufactured.  The diagram below is a simplified version of the 1000+ steps it takes to make a chip.

Outsourced Semiconductor Assembly and Test (OSAT)

  • Companies that package and test chips made by foundries and IDMs
  • OSAT companies take the wafer made by foundries, dice (cut) them up into individual chips, test them and then package them and ship them to the customer

 

Fab Issues

  • As chips have become denser (with trillions of transistors on a single wafer) the cost of building fabs has skyrocketed – now >$10 billion for one chip factory
  • One reason is that the cost of the equipment needed to make the chips has skyrocketed
    • Just one advanced lithography machine from ASML, a Dutch company, costs $150 million
    • There are ~500+ machines in a fab (not all as expensive as ASML)
    • The fab building is incredibly complex. The clean room where the chips are made is just the tip of the iceberg of a complex set of plumbing feeding gases, power, liquids all at the right time and temperature into the wafer fab equipment
  • The multi-billion-dollar cost of staying at the leading edge has meant most companies have dropped out. In 2001 there were 17 companies making the most advanced chips.  Today there are only two – Samsung in Korea and TSMC in Taiwan.
    • Given that China believes Taiwan is a province of China this could be problematic for the West.

What’s Next – Technology

It’s getting much harder to build chips that are denser, faster, and use less power, so what’s next?

  • Instead of making a single processor do all the work, logic chip designers have put multiple specialized processors inside of a chip
  • Memory chips are now made denser by stacking them 100+ layers high
  • As chips are getting more complex to design, which means larger design teams, and longer time to market, Electronic Design Automation companies are embedding artificial intelligence to automate parts of the design process
  • Wafer equipment manufacturers are designing new equipment to help fabs make chips with lower power, better performance, optimum area-to-cost, and faster time to market

What’s Next – Business

The business model of Integrated Device Manufacturers (IDMs) like Intel is rapidly changing. In the past there was a huge competitive advantage in being vertically integrated i.e. having your own design tools and fabs. Today, it’s a disadvantage.

  • Foundries have economies of scale and standardization. Rather than having to invent it all themselves, they can utilize the entire stack of innovation in the ecosystem. And just focus on manufacturing
  • AMD has proven that it’s possible to shift from an IDM to a fabless model. Intel is trying. They are going to use TSMC as a foundry for some of their own chips as well as set up their own foundry business

What’s Next – Geopolitics

Controlling advanced chip manufacturing in the 21st century may well prove to be like controlling the oil supply in the 20th. The country that controls this manufacturing can throttle the military and economic power of others.

  • Ensuring a steady supply of chips has become a national priority. (China’s largest import by dollar value is semiconductors – larger than oil)
  • Today, both the U.S. and China are rapidly trying to decouple their semiconductor ecosystems from each other; China is pouring $100+ billion of government incentives into building Chinese fabs, while simultaneously trying to create indigenous supplies of wafer fab equipment and electronic design automation software
  • Over the last few decades the U.S. moved most of its fabs to Asia. Today we are incentivizing bringing fabs and chip production back to the U.S.

An industry that previously was only of interest to technologists is now one of the largest pieces in great power competition.

What’s Plan B? – The Small, the Agile, and the Many

This post previously appeared in the Proceedings of the Naval Institute.


One of the most audacious and bold manifestos for the future of Naval innovation has just been posted by the Rear Admiral who heads up the Office of Naval Research. It may be the hedge we need to deter China in the South China Sea.


While You Were Out
In the two decades since 9/11, while the U.S. was fighting Al-Qaeda and ISIS, China built new weapons and developed new operational concepts to negate U.S. military strengths. They’ve built long-range ballistic missiles with conventional warheads to hit our aircraft carriers. They converted reefs in international waters into airbases, creating unsinkable aircraft carriers that extend the range of their aircraft, and armed them with surface-to-air missiles that make it dangerous to approach China’s mainland and Taiwan.

To evade our own fleet air defense systems, they’ve armed their missiles with maneuvering warheads, and to reduce our reaction time they have missiles that travel at hypersonic speed.

The sum of these Chinese offset strategies means that in the South China Sea the U.S. can no longer deter a war because we can no longer guarantee we can win one.

This does not bode well for our treaty allies, Japan, the Philippines, and South Korea. Control of the South China Sea would allow China to control fishing operations and oil and gas exploration; to politically coerce other countries in the region; to enforce an air defense identification zone (ADIZ) over the South China Sea; or to enforce a blockade around Taiwan or invade it.

What To Do About It?
Today the Navy has aircraft carriers, submarines, surface combatants, aircraft, and sensors under the sea and in space. Our plan to counter China can be summed up as: more of the same, but better and more tightly integrated.

This might be the right strategy. However, what if we’re wrong? What if our assumptions about the survivability of these naval platforms, and about the ability of our Marines to operate, rest on incorrect assumptions about our investments in materiel, operational concepts, and mental models?

If so, it might be prudent for the Navy to have a hedge strategy. Think of a hedge as a “just in case” strategy. It turns out the Navy had one in WWII. And it won the war in the Pacific.

War Plan Orange
In the 1930s U.S. war planners thought about a future war with Japan. The result was “War Plan Orange” centered on the idea that ultimately, American battleships would engage the Japanese fleet in a gunnery battle, which the U.S. would win.

Unfortunately for us, Japan didn’t adhere to our war plan. They were bolder and more imaginative than we were. Instead of battleships, they used aircraft carriers to attack us. The U.S. woke up on Dec. 7, 1941, with most of our battleships sitting on the bottom of Pearl Harbor. The core precept of War Plan Orange went to the bottom with them.

But the portfolio of options available to Admiral Nimitz and President Roosevelt were not limited to battleships. They had a hedge strategy in place in case the battleships were not the solution. The hedges? Aircraft carriers and submarines.

While the U.S. Navy’s primary investment pre-WW2 was in battleships, the Navy had also made a substantial alternative investment – in aircraft carriers and submarines. The Navy launched the first aircraft carrier in 1920. For the next two decades they ran fleet exercises with them. At the beginning of the war the U.S. Navy had seven aircraft carriers (CVs) and one aircraft escort vessel (AVG). By the end of the war the U.S. had built 111 carriers. (24 fleet carriers, 9 light carriers and 78 escort carriers.) 12 were sunk.

As it turned out, it was carriers, subs, and the Marines who won the Pacific conflict.

Our Current Plan
Fast forward to today. For the last 80 years the carriers in a Carrier Strike Group and submarines remain the preeminent formation for U.S. naval warfare.

China has been watching us operate and fight in this formation for decades. But what if carrier strike groups can no longer win a fight? What if the U.S. is underestimating China’s capabilities, intents, imagination, and operating concepts? What if they can disable or destroy our strike groups (via cyber, conventionally armed ICBMs, cruise missiles, hypersonics, drones, submarines, etc.)? If that’s a possibility, then what is the Navy’s 21st-century hedge? What is its Plan B?

Says Who?
Here’s where this conversation gets interesting. While I have an opinion, think tanks have an opinion, and civilians in the Pentagon have an opinion, RADM Lorin Selby, the Chief of the Office of Naval Research (ONR), has more than just “an opinion.” ONR is the Navy’s science and technology systems command. Its job is to see over the horizon and think about what’s possible. Selby was previously deputy commander of the Naval Sea Systems Command (NAVSEA) and commander of the Naval Surface Warfare Centers (NSWC). As the chief engineer of the Navy, he was the master of engineering the large and the complex.

What follows is my paraphrase of RADM Selby’s thinking about the hedge strategy the Navy needs and how to get there.

Diversification
A hedge strategy is built on the premise that you invest in different things, not more or better versions of the same.

If you look at the Navy force structure today and its plan for the next decade, at first glance you might say they have a diversified portfolio and a plan for more. The Navy has aircraft carriers, submarines, surface combatants, and many types of aircraft. And they plan for a distributed fleet architecture, including 321 to 372 manned ships and 77 to 140 large, unmanned vehicles.

But it is equally accurate to say this is not a diversified portfolio, because all these assets share many of the same characteristics:

  • They are all large compared to their predecessors
  • They are all expensive – to the point where the Navy can’t afford the number of platforms our force structure assessments suggest they need
  • They are all multi-mission and therefore complex
  • The system-to-system interactions to create these complex integrations drive up cost and manufacturing lead times
  • Long manufacturing lead times mean they have no surge capacity
  • They are acquired on a requirements model that lags operational identification of need by years…sometimes decades when you fold in the construction span times for some of these complex capabilities like carriers or submarines
  • They are difficult to modernize – The ability to update the systems aboard these platforms, even the software systems, still takes years to accomplish

If the primary asset of the U.S. fleet now and in the future is the large and the complex, then surely there must be a hedge, a Plan B somewhere? (Like the pre-WW2 aircraft carriers.)  In fact, there isn’t. The Navy has demos of alternatives, but there is no force structure built on a different set of principles that would complicate China’s plans and create doubt in our adversaries of whether they could prevail in a conflict.

The Hedge Strategy – Create “the small, the agile, and the many”
In a world where the large and the complex are either too expensive to generate en masse or potentially too vulnerable to put at risk, “the small, the agile, and the many” has the potential to define the future of Navy formations.

We need formations composed of dozens, hundreds, or even thousands of unmanned vehicles above, below, and on the ocean surface. We need to build collaborating, autonomous formations…NOT a collection of platforms.

This novel formation is going to be highly dependent on artificial intelligence and new software that enables cross-platform collaboration and human-machine teaming.

To do this we need a different world view. One that is no longer tied to large 20th-century industrial systems, but to a 21st-century software-centric agile world.

The Selby Manifesto:

  • Digitally adept naval forces will outcompete forces organized around principles of industrial optimization. “Data is the new oil and software is the new steel”
  • The systems engineering process we have built over the last 150 years is not optimal for software-based systems.
    • Instead, iterative design approaches dominate software design
  • The Navy has world-class engineering and acquisition processes to deal with hardware
    • but applying the same process and principles to digital systems is a mistake
  • The design principles that drive software companies are fundamentally different than those that drive industrial organizations.
  • Applying industrial-era principles to digital era technologies is a recipe for failure
  • The Navy has access to amazing capabilities that already exist. And part of our challenge will be to integrate those capabilities together in novel ways that allow new modes of operation and more effectiveness against operational priorities
  • There’s an absolute need to foster a collaborative partnership with academia and businesses – big businesses, small businesses, and startups
  • This has serious implications for how the Navy and Marine Corps need to change. What do we need to change when it comes to engineering and operating concepts?

How To Get “The Small, The Agile, and The Many” Tested and In The Water?
Today, “the small, the agile and the many” have been run in war games, exercises, simulations, and small demonstrations, but not built at scale in a formation of dozens, hundreds, or even thousands of unmanned vehicles above, below and on the ocean’s surface. We need to prove whether these systems can fight alongside our existing assets (or independently if required).

ONR plans to rapidly prove that this idea works, and that the Navy can build it. Or they will disprove the theory. Either way the Navy needs to know quickly whether they have a hedge. Time is not on our side in the South China Sea.

ONR’s plan is to move boldly. They’re building this new “small, the agile, and the many” formation on digital principles and they’re training a new class of program managers – digital leaders – to guide the journey through the complex software and data.

They are going to partner with industry using rapid, simple, and accountable acquisition processes – getting from first discussions to contract in short time periods so the work can start quickly. And these processes are going to attract new partners and allies.

They’re going to use all the ideas already on the shelves, whether government shelves or commercial shelves, and focus on what can be integrated and then what must be invented.

All the while they’ve been talking to commanders in fleets around the world. And taking a page from digital engineering practices, instead of generating a list of requirements, they’re building to the operational need by asking “what is the real problem?” They are actively listening, using Lean and design thinking to hear and understand the problems, to build a minimal viable product – a prototype solution – and get it into the water. Then asking, did that solve the problem…no? Why not? Okay, we are going to go fix it and come back in a few months, not years.

The goal is to demonstrate this novel naval formation virtually, digitally, and then physically with feedback from in water experiments. Ultimately the goal is getting agile prototyping out to sea and doing it faster than ever before.

In the end the goal is to effectively evaluate the idea of the small, the agile, and the many. How to iterate at scale and at speed. How to take things that meet operational needs and make them part of the force structure, deploying them in novel naval formations, learning their operational capabilities, not just their technical merits. If we’re successful, then we can help guarantee the rest of the century.

What Can Go Wrong?
During the Cold War the U.S. prided itself on developing offset strategies – technical or operational concepts that leapfrogged the Soviet Union. Today China has done that to us. They’ve surprised us with multiple offset strategies, and more are likely to come. The fact is that China is innovating faster than the Department of Defense; they’ve gotten inside the DoD’s OODA loop.

But China is not innovating faster than our nation as a whole. Innovation in our commercial ecosystem — in AI, machine learning, autonomy, commercial access to space, cyber, biotech, semiconductors (all technologies the DoD and Navy need) — continues to solve the toughest problems at speed and scale, attracting the best and the brightest with private capital that dwarfs the entire DoD R&E (research and engineering) budget.

RADM Selby’s plan of testing the hedge of “the small, the agile, and the many” using tools and technologies of the 21st century is exactly the right direction for the Navy.

However, in peacetime bold, radical ideas are not welcomed. They disrupt the status quo. They challenge existing reporting structures, and in a world of finite budgets, money has to be taken from existing programs and primes, or programs even have to be killed, to make the new happen. Even when positioned as a hedge, existing vendors, existing Navy and DoD organizations, and existing political power centers will all see “the small, the agile, and the many” as a threat. It challenges careers, dollars, and mindsets. Many will do their best to impede, kill, or co-opt this idea.

We are outmatched in the South China Sea. And the odds are getting longer each year. In a war with China we won’t have years to rebuild our Navy.

A crisis is an opportunity to clear out the old to make way for the new. If senior leadership of the Navy, DoD, executive branch, and Congress truly believe we need to win this fight, that this is a crisis, then ONR and “the small, the agile, and the many” needs a direct report to the Secretary of the Navy and the budget and authority to make this happen.

The Navy and the country need a hedge. Let’s get started now.

The Gordian Knot Center for National Security Innovation at Stanford

penitus cogitare, cito agere – think deeply, act quickly

75 years ago, the Office of Naval Research (ONR) helped kickstart innovation in Silicon Valley with a series of grants to Fred Terman, Dean of Stanford’s Engineering school. Terman used the money to set up the Stanford Electronics Research Lab. He staffed it with his lab managers who had built the first electronic warfare and electronic intelligence systems in WWII. This lab pushed the envelope of basic and applied research in microwave devices and electronics and within a few short years made Stanford a leader in these fields. The lab became ground zero for the wave of Stanford’s entrepreneurship and innovation in the 1950s and ’60s and helped form what would later be called Silicon Valley.

75 years later, ONR just laid down a bet again, one we believe will be equally transformative. They’re the first sponsors of the new Gordian Knot Center for National Security Innovation at Stanford that Joe Felter, Raj Shah, and I have started.


Gordian What?

A Gordian Knot is a metaphor for an intractable problem. Today, the United States is facing several seemingly intractable national security problems simultaneously.

We intend to help solve them in Stanford’s Gordian Knot Center for National Security Innovation. Our motto, penitus cogitare, cito agere – think deeply, act quickly – embraces our unique combination of deep problem understanding and rapid solutions. The Center combines six unique strengths of Stanford and its location in Silicon Valley.

  1. The insights and expertise of Stanford international and national security policy leaders
  2. The technology insights and expertise of Stanford Engineering
  3. Exceptional students willing to help the country win the Great Power Competition
  4. Silicon Valley’s deep technology ecosystem
  5. Our experience in rapid problem understanding, rapid iteration and deployment of solutions with speed and urgency
  6. Access to risk capital at scale

Our focus will match our motto. We’re going to coordinate resources at Stanford and peer universities, and across Silicon Valley’s innovation ecosystem to:

  • Scale national security innovation education
  • Train national security innovators
  • Offer insight, integration, and policy outreach
  • Provide a continual output of minimal viable products that can act as catalysts for solutions to the toughest problems

Why Now? Why Us?

Over the last decade we’ve created a series of classes in entrepreneurship, rapid innovation, and national security: Lean LaunchPad; National Science Foundation I-Corps; Hacking for Defense; Hacking for Diplomacy; Technology, Innovation and Modern War last year; and this year Technology, Innovation and Great Power Competition. These classes have been widely adopted, across the U.S. and globally.

Simultaneously, each of us was actively engaged in helping different branches of the government understand, react, and deliver solutions in a rapidly changing and challenging environment. It’s become clear to us that for the first time in three decades, the U.S. is now engaged in a Great Power Competition. And we’re behind. Our national power (our influence and footprint on the world stage) is being challenged and effectively negated by autocratic regimes like China and Russia.

GKC joins a select group of national security think tanks

At Stanford, the Gordian Knot Center will sit in the Freeman Spogli Institute for International Studies, run by Mike McFaul, former ambassador to Russia. Mike has graciously agreed to be our Principal Investigator, along with Riitta Katila in the Management Science and Engineering Department (MS&E) in the Engineering School. MS&E is where disruptive technology meets national security, and has a long history of brilliant contributions from Bill Perry, Sig Hecker, Elisabeth Pate-Cornell, and others. (Stanford’s other policy institute is the Hoover Institution, run by Condoleezza Rice, former secretary of state.) All are world-class leaders in understanding international problems, policies, and institutions. The Center joins a select group of other U.S. foreign affairs and national security think tanks.

We intend to focus the new Center on solving problems across the spectrum of activities that create and sustain national power. National power is the combination of a country’s diplomacy (soft power and alliances), information, military and economic strength, as well as its finance, intelligence, and law enforcement – or DIME-FIL. Our projects will be those at the intersection of DIME-FIL with the onslaught of commercial technologies (AI, machine learning, autonomy, biotech, cyber, semiconductors, commercial access to space, et al.). And we’re going to hit the ground running by moving our two national security classes — Hacking for Defense, and Technology Innovation and Great Power Competition (which this year became a required course in the International Policy program) — into the Center.

We hope our unique charter, “think deeply, act quickly” can complement the extraordinary work these other institutions provide.

The Office of Naval Research (ONR)

The Office of Naval Research (ONR) has been planning, fostering, and encouraging scientific research—and reimagining naval power—since 1946. The grants it made to Stanford that year were the first to any university.

Today, the Navy and the U.S. Marine Corps are looking for ways to accelerate technology development and delivery to our naval forces. There is broad consensus that the current pace of technology development and adoption is unsatisfactory, and that without significant reform, we will lose the competition with China in the South China Sea for maritime superiority.

Rear Admiral Selby, Chief of Naval Research, has recognized that it’s no longer “business as usual.” That ONR delivering sustaining innovations for the existing fleet and Marine forces is no longer good enough to deter war or keep us in the fight. And that ONR once again needs to lead with disruptive technologies, new operational concepts, new types of program management and mindsets. He’s on a mission to provide the Navy and U.S. Marine Corps with just that. When we approached him about the idea of the Gordian Knot Center, he reminded us that not only did ONR sponsor Stanford in 1946, they’ve been sponsoring our Hacking for Defense class since 2016! Now they’ve become our charter sponsor for the Gordian Knot Center.

We hope to earn it – for him, ONR, and the country.

Steve, Joe and Raj

Lessons Learned

The Center combines six unique strengths of Stanford and its location in Silicon Valley

  • The insights and expertise of Stanford international and national security policy leaders
  • The technology insights and expertise of Stanford Engineering
  • Exceptional students willing to help the country win the Great Power Competition
  • Silicon Valley’s deep technology ecosystem
  • Our experience in rapid problem understanding, rapid iteration and deployment of solutions with speed and urgency
  • Access to risk capital at scale

Our focus will match our motto. We’re going to coordinate resources at Stanford and peer universities and across Silicon Valley’s innovation ecosystem to:

  • Scale national security innovation education
  • Train national security innovators
  • Offer insight, integration, and policy outreach
  • Provide a continual output of minimal viable products that can act as catalysts for solutions to the toughest problems