Artificial Intelligence and Machine Learning – Explained

Artificial Intelligence is a once-in-a-lifetime commercial and defense game changer

(download a PDF of this article here)

Hundreds of billions of dollars in public and private capital are being invested in Artificial Intelligence (AI) and Machine Learning companies. The number of patents filed in 2021 is more than 30 times higher than in 2015, as companies and countries across the world have realized that AI and Machine Learning will be a major disruptor and could potentially change the balance of military power.

Until recently, the hype exceeded reality. Today, however, advances in AI in several important areas (here, here, here, here and here) equal and even surpass human capabilities.

If you haven’t paid attention, now’s the time.

Artificial Intelligence and the Department of Defense (DoD)
The Department of Defense considers Artificial Intelligence such a foundational set of technologies that it created a dedicated organization – the JAIC (Joint Artificial Intelligence Center) – to enable and implement artificial intelligence across the Department. The JAIC provides the infrastructure, tools, and technical expertise DoD users need to successfully build and deploy their AI-accelerated projects.

Some specific defense related AI applications are listed later in this document.

We’re in the Middle of a Revolution
Imagine it’s 1950, and you’re a visitor who traveled back in time from today. Your job is to explain the impact computers will have on business, defense and society to people who are using manual calculators and slide rules. You succeed in convincing one company and a government to adopt computers, learn to code much faster than their competitors/adversaries, and figure out how to digitally enable their business – supply chain, customer interactions, etc. Think about the competitive edge they’d have by today, in business or as a nation. They’d steamroll everyone.

That’s where we are today with Artificial Intelligence and Machine Learning. These technologies will transform businesses and government agencies. Today, 100s of billions of dollars in private capital have been invested in 1,000s of AI startups. The U.S. Department of Defense has created a dedicated organization to ensure its deployment.

But What Is It?
Compared to the classic computing we’ve had for the last 75 years, AI has led to new types of applications, e.g. facial recognition; new types of algorithms, e.g. machine learning; new types of computer architectures, e.g. neural nets; new hardware, e.g. GPUs; new types of software developers, e.g. data scientists; all under the overarching theme of artificial intelligence. The sum of these feels like buzzword bingo. But they herald a sea change in what computers are capable of doing, how they do it, and what hardware and software is needed to do it.

This brief will attempt to describe all of it.

New Words to Define Old Things
One of the reasons the world of AI/ML is confusing is that it’s created its own language and vocabulary. It uses new words to define programming steps, job descriptions, development tools, etc. But once you understand how the new world maps onto the classic computing world, it starts to make sense. So first a short list of some key definitions.

AI/ML – a shorthand for Artificial Intelligence/Machine Learning

Artificial Intelligence (AI) – a catchall term used to describe “intelligent machines” that can solve problems, make/suggest decisions and perform tasks that traditionally required humans to do. AI is not a single thing, but a constellation of different technologies.

Machine Learning (ML) – a subfield of artificial intelligence. Humans combine data with algorithms (see here for a list) to train a model on that data. The trained model can then make predictions on new data (is this picture a cat, a dog or a person?) or drive decision-making processes (like understanding text and images) without being explicitly programmed to do so.

Machine learning algorithms – computer programs that adjust themselves to perform better as they are exposed to more data. The “learning” part of machine learning means these programs change how they process data over time. In other words, a machine-learning algorithm can adjust its own settings, given feedback on its previous performance in making predictions about a collection of data (images, text, etc.).

Deep Learning/Neural Nets – a subfield of machine learning. Neural networks make up the backbone of deep learning. (The “deep” in deep learning refers to the depth of layers in a neural network.) Neural nets are effective at a variety of tasks (e.g., image classification, speech recognition). A deep learning neural net algorithm is given massive volumes of data and a task to perform – such as classification. The resulting model is capable of solving complex tasks such as recognizing objects within an image and translating speech in real time. In reality, the neural net is a logical concept that gets mapped onto a physical set of specialized processors. (See here.)
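To make “depth” concrete, here is a minimal sketch (not from the original article) of a small neural network defined in PyTorch; the layer sizes are arbitrary illustrative choices:

```python
# A minimal "deep" network sketched in PyTorch: three stacked layers.
# The sizes (784 inputs, 10 output classes) are arbitrary illustrative choices.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),  # layer 1: input features -> hidden units
    nn.ReLU(),
    nn.Linear(128, 64),   # layer 2: a second hidden layer (the "depth")
    nn.ReLU(),
    nn.Linear(64, 10),    # layer 3: scores for 10 possible classes
)
```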

Data Science – a new field of computer science. Broadly it encompasses data systems and processes aimed at maintaining data sets and deriving meaning out of them. In the context of AI, it’s the practice of people who are doing machine learning.

Data Scientists – responsible for extracting insights that help businesses make decisions. They explore and analyze data using machine learning platforms to create models about customers, processes, risks, or whatever they’re trying to predict.

What’s Different? Why is Machine Learning Possible Now?
To understand why AI/Machine Learning can do these things, let’s compare them to computers before AI came on the scene. (Warning – simplified examples below.)

Classic Computers

For the last 75 years computers (we’ll call these classic computers) have both shrunk to pocket size (iPhones) and grown to the size of warehouses (cloud data centers), yet they all continued to operate essentially the same way.

Classic Computers – Programming
Classic computers are designed to do anything a human explicitly tells them to do. People (programmers) write software code (programming) to develop applications, thinking a priori about all the rules, logic and knowledge that need to be built into an application so that it can deliver a specific result. These rules are explicitly coded into a program using a software language (Python, JavaScript, C#, Rust, …).
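To make the contrast with machine learning concrete, here is a hypothetical example of classic, rule-based programming – every rule is spelled out in advance by a human:

```python
# Classic programming: a human writes every rule explicitly.
# A toy, hypothetical spam filter -- each condition is hand-coded in advance.
def is_spam(subject: str, sender: str) -> bool:
    banned_phrases = {"winner", "free money", "act now"}
    if sender.endswith("@known-spammer.example"):
        return True
    if any(phrase in subject.lower() for phrase in banned_phrases):
        return True
    return False

print(is_spam("You are a WINNER", "promo@shop.example"))  # True
```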

Classic Computers –  Compiling
The code is then compiled using software to translate the programmer’s source code into a version that can be run on a target computer/browser/phone. For most of today’s programs, the computer used to develop and compile the code does not have to be that much faster than the one that will run it.

Classic Computers – Running/Executing Programs
Once a program is coded and compiled, it can be deployed and run (executed) on a desktop computer, phone, in a browser window, a data center cluster, in special hardware, etc. Programs/applications can be games, social media, office applications, missile guidance systems, bitcoin mining, or even operating systems, e.g. Linux, Windows, iOS. These programs run on the same type of classic computer architectures they were programmed on.

Classic Computers – Software Updates, New Features
For programs written for classic computers, software developers receive bug reports, monitor for security breaches, and send out regular software updates that fix bugs, increase performance and at times add new features.

Classic Computers-  Hardware
The CPUs (Central Processing Units) that run these classic computer applications all share the same basic design (architecture). CPUs are designed to handle a wide range of tasks quickly, in a serial fashion. They range from Intel x86 chips and the ARM cores in Apple’s M1 SoC to the z15 in IBM mainframes.

Machine Learning

In contrast to programming classic computers with fixed rules, machine learning is just like it sounds – we can train/teach a computer to “learn by example” by feeding it lots and lots of examples. (For images, a rule of thumb is that a machine learning algorithm needs at least 5,000 labeled examples of each category in order to produce an AI model with decent performance.) Once it is trained, the computer runs on its own and can make predictions and/or complex decisions.

Just as traditional programming has three steps – first coding a program, next compiling it and then running it – machine learning also has three steps: training (teaching), pruning and inference (predicting by itself).

Machine Learning – Training
Unlike programming classic computers with explicit rules, training is the process of “teaching” a computer to perform a task, e.g. recognize faces or signals, understand text, etc. (Now you know why you’re asked to click on images of traffic lights, crosswalks, stop signs, and buses, or to type the text of a scanned image, in reCAPTCHA.) Humans provide massive volumes of “training data” (the more data, the better the model’s performance) and select the appropriate algorithm to find the best optimized outcome. (See the detailed “machine learning pipeline” section for the gory details.)

By running an algorithm selected by a data scientist on a set of training data, the Machine Learning system generates the rules embedded in a trained model. The system learns from examples (training data), rather than being explicitly programmed. (See the “Types of Machine Learning” section for more detail.) This self-correction is pretty cool. An input to a neural net results in a guess about what that input is. The neural net then takes its guess and compares it to a ground truth about the data, effectively asking an expert “Did I get this right?” The difference between the network’s guess and the ground truth is its error. The network measures that error, and walks the error back over its model, adjusting weights to the extent that they contributed to the error.
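Here is a minimal sketch of that feedback loop as a PyTorch-style training loop; the data is random stand-in data (an illustrative assumption, not from the article) – a real pipeline would load labeled examples:

```python
# A minimal training-loop sketch in PyTorch: guess, measure the error against
# the ground truth, then walk the error back and adjust the weights.
# The data here is random stand-in data; a real pipeline would load labeled examples.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
inputs = torch.randn(512, 20)           # stand-in "training data"
labels = torch.randint(0, 2, (512,))    # stand-in ground-truth labels

loss_fn = nn.CrossEntropyLoss()         # measures the error
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    guesses = model(inputs)             # the network's guess for each example
    error = loss_fn(guesses, labels)    # how far the guesses are from the ground truth
    optimizer.zero_grad()
    error.backward()                    # walk the error back through the model
    optimizer.step()                    # adjust the weights that contributed to the error
```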

Just to make the point again: The algorithms combined with the training data – not external human computer programmers – create the rules that the AI uses. The resulting model is capable of solving complex tasks such as recognizing objects it’s never seen before, translating text or speech, or controlling a drone swarm.

(Instead of building a model from scratch, for common machine learning tasks you can now buy pretrained models from others – see here – much like chip designers buying IP cores.)

Machine Learning Training – Hardware
Training a machine learning model is a very computationally intensive task. AI hardware must be able to perform thousands of multiplications and additions in a mathematical process called matrix multiplication. It requires specialized chips to run fast. (See the AI semiconductor section for details.)
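As a rough illustration (using NumPy and arbitrary sizes, not figures from the article), a single neural-network layer is essentially one large matrix multiply – which is why hardware that does these in parallel matters so much:

```python
# Why matrix multiplication dominates: one neural-network layer is essentially
# a single large matrix multiply (illustrative sizes, not from the article).
import numpy as np

batch = np.random.rand(256, 1024)     # 256 inputs, 1,024 features each
weights = np.random.rand(1024, 512)   # a layer with 512 output units

activations = batch @ weights         # 256 x 1024 x 512 ~= 134 million multiply-adds
print(activations.shape)              # (256, 512)
```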

Machine Learning – Simplification via pruning, quantization, distillation
Just like classic computer code needs to be compiled and optimized before it is deployed on its target hardware, machine learning models are simplified and modified (pruned) to use less computing power, energy, and memory before they’re deployed to run on their hardware.
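A hedged sketch of what two of those simplification steps can look like in PyTorch (an illustrative example under assumed layer sizes, not a specific deployment recipe):

```python
# A hypothetical sketch of two common simplification steps in PyTorch:
# pruning away small weights, then quantizing the rest to 8-bit integers.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# Prune: zero out the 50% of weights with the smallest magnitude in each Linear layer
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# Quantize: store Linear-layer weights as 8-bit integers instead of 32-bit floats
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```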

Machine Learning – Inference Phase
Once the system has been trained it can be copied to other devices and run. And the computing hardware can now make inferences (predictions) on new data that the model has never seen before.

Inference can even occur locally on edge devices where physical devices meet the digital world (routers, sensors, IoT devices), close to the source of where the data is generated. This reduces network bandwidth and latency issues.
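A minimal sketch of edge inference, assuming a model has already been trained and exported to a hypothetical “model.onnx” file and is loaded locally with ONNX Runtime:

```python
# A minimal edge-inference sketch: load an already-trained, exported model and
# predict locally, with no connection back to the training cluster.
# "model.onnx" is a hypothetical file produced earlier by the training pipeline.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

sensor_frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for new data
prediction = session.run(None, {input_name: sensor_frame})
print(prediction[0])
```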

Machine Learning Inference – Hardware
Inference (running the model) requires substantially less compute power than training. But inference also benefits from specialized AI chips. (See the AI semiconductor section for details.)

Machine Learning – Performance Monitoring and Retraining
Just like classic computers where software developers do regular software updates to fix bugs and increase performance and add features, machine learning models also need to be updated regularly by adding new data to the old training pipelines and running them again. Why?

Over time machine learning models get stale. Their real-world performance generally degrades over time if they are not updated regularly with new training data that matches the changing state of the world. The models need to be monitored and retrained regularly for data and/or concept drift, harmful predictions, performance drops, etc. To stay up to date, the models need to re-learn the patterns by looking at the most recent data that better reflects reality.
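One simple, illustrative way to watch for data drift is to compare the distribution of a feature in the original training data against recent production data – a sketch using a two-sample statistical test on synthetic data (not a specific monitoring product):

```python
# An illustrative drift check: compare the distribution of a feature in the
# original training data with the same feature in recent production data.
import numpy as np
from scipy.stats import ks_2samp

training_feature = np.random.normal(0.0, 1.0, 10_000)    # stand-in for training data
production_feature = np.random.normal(0.4, 1.0, 10_000)  # stand-in for recent live data

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print("Distribution shift detected - consider retraining the model")
```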

One Last Thing – “Verifiability/Explainability”
Understanding how an AI works is essential to fostering trust and confidence in AI production models.

Neural Networks and Deep Learning differ from other types of Machine Learning algorithms in that they have low explainability. They can generate a prediction, but it is very difficult to understand or explain how they arrived at it. This “explainability problem” is often described as a problem for all of AI, but it’s primarily a problem for Neural Networks and Deep Learning. Other types of Machine Learning algorithms – for example decision trees or linear regression – have very high explainability. The results of the five-year DARPA Explainable AI Program (XAI) are worth reading here.
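A small illustration of the difference: a decision tree’s learned rules can be printed and read directly, something a deep neural network’s millions of weights do not allow (an illustrative sketch using scikit-learn and the classic iris dataset):

```python
# Contrast in explainability: a decision tree's learned rules can be printed and
# read directly, unlike the millions of weights inside a deep neural network.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)

# Human-readable if/then rules recovered from the trained model
print(export_text(tree, feature_names=list(iris.feature_names)))
```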

So What Can Machine Learning Do?

It’s taken decades but as of today, on its simplest implementations, machine learning applications can do some tasks better and/or faster than humans. Machine Learning is most advanced and widely applied today in processing text (through Natural Language Processing) followed by understanding images and videos (through Computer Vision) and analytics and anomaly detection. For example:

Recognize and Understand Text/Natural Language Processing
AI is better than humans on basic reading comprehension benchmarks like SuperGLUE and SQuAD, and its performance on complex linguistic tasks is almost there. Applications: GPT-3, M6, OPT-175B, Google Translate, Gmail Autocomplete, Chatbots, Text summarization.

Write Human-like Answers to Questions and Assist in Writing Computer Code
An AI can write original text that is indistinguishable from text created by humans (examples: GPT-3, Wu Dao 2.0) or generate computer code (examples: GitHub Copilot, Wordtune).

Recognize and Understand Images and video streams
An AI can see and understand what it sees. It can identify and detect an object or a feature in an image or video; it can even identify faces. It can scan news broadcasts or read and assess text that appears in videos. It has uses in threat detection – airport security, banks, and sporting events; in medicine, to interpret MRIs or to design drugs; and in retail, to scan and analyze in-store imagery to determine inventory movement. Examples of ImageNet benchmarks here and here.

Turn 2D Images into 3D Rendered Scenes
AI using NeRFs (“neural radiance fields”) can take 2D snapshots and render a finished 3D scene in real time to create avatars or scenes for virtual worlds, to capture video-conference participants and their environments in 3D, or to reconstruct scenes for 3D digital maps. The technology is an enabler of the metaverse, generating digital representations of real environments that creators can modify and build on. Self-driving cars are also using NeRFs to render city-scale scenes spanning multiple blocks.

Detect Changes in Patterns/Recognize Anomalies
An AI can recognize patterns that don’t match the behaviors expected for a particular system, out of millions of different inputs or transactions. These applications can discover evidence of attacks on financial networks, detect fraud in insurance filings or credit card purchases, identify fake reviews, and even flag sensor data in industrial facilities that indicates a safety issue. Examples here, here and here.
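A minimal anomaly-detection sketch (synthetic transaction amounts for illustration, not a production fraud system):

```python
# A minimal anomaly-detection sketch: flag transactions that do not match the
# patterns seen in thousands of normal ones (synthetic amounts, for illustration).
import numpy as np
from sklearn.ensemble import IsolationForest

normal_transactions = np.random.normal(100, 15, size=(10_000, 1))  # typical amounts
new_transactions = np.array([[105.0], [98.0], [9_500.0]])          # the last one is suspicious

detector = IsolationForest(contamination=0.001).fit(normal_transactions)
print(detector.predict(new_transactions))  # 1 = looks normal, -1 = anomaly
```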

Power Recommendation Engines
An AI can provide recommendations based on user behavior. In ecommerce, recommendation engines suggest products to users for future purchases based on their shopping history. Examples: Netflix, TikTok, CrossingMinds and Recommendations AI.

Recognize and Understand Your Voice
An AI can understand spoken language. Then it can comprehend what is being said and in what context. This can enable chatbots to have a conversation with people. It can record and transcribe meetings. (Some versions can even read lips to increase accuracy.) Applications: Siri/Alexa/Google Assistant. Example here

Create Artificial Images
AI can create artificial images (DeepFakes) that are indistinguishable from real ones using Generative Adversarial Networks. This is useful in entertainment, virtual worlds, gaming, fashion design, etc. Synthetic faces are now indistinguishable from – and rated as more trustworthy than – photos of real people. Paper here.

Create Artist Quality Illustrations from A Written Description
AI can generate images from text descriptions, creating anthropomorphized versions of animals and objects, and combining unrelated concepts in plausible ways. An example application is DALL-E.

Generative Design of Physical Products
Engineers can input design goals into AI-driven generative design software, along with parameters such as performance or spatial requirements, materials, manufacturing methods, and cost constraints. The software explores all the possible permutations of a solution, quickly generating design alternatives. Example here.

Sentiment Analysis
An AI can leverage deep natural language processing, text analysis, and computational linguistics to gain insight into customer opinion, understand consumer sentiment, and measure the impact of marketing strategies. Examples: Brand24, MonkeyLearn.
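A hypothetical sketch using an off-the-shelf pretrained sentiment model (the library downloads a default model on first run; the reviews and output are illustrative):

```python
# A hypothetical sketch using an off-the-shelf pretrained sentiment model
# (the library downloads a default model on first run; output is illustrative).
from transformers import pipeline

analyzer = pipeline("sentiment-analysis")
reviews = [
    "The new release is fantastic - setup took five minutes.",
    "Support never answered and the product stopped working.",
]
for review, result in zip(reviews, analyzer(reviews)):
    print(result["label"], round(result["score"], 3), "-", review)
```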

What Does this Mean for Businesses?

Skip this section if you’re interested in national security applications

Hang on to your seat. We’re just at the beginning of the revolution. The next phase of AI, powered by ever more powerful AI hardware and cloud clusters, will combine some of these basic algorithms into applications that do things no human can. It will transform business and defense in ways that will create new applications and opportunities.

Human-Machine Teaming
Applications with embedded intelligence have already begun to appear thanks to massive language models. For example – Copilot as a pair programmer in Microsoft Visual Studio Code. It’s not hard to imagine DALL-E 2 as an illustration assistant in a photo editing application, or GPT-3 as a writing assistant in Google Docs.

AI in Medicine
AI applications are already appearing in radiology, dermatology, and oncology. Examples: IDx-DR, OsteoDetect, Embrace2. AI medical image identification can automatically detect lesions and tumors with diagnostic accuracy equal to or greater than that of humans. For pharma, AI will power drug discovery and design for finding new drug candidates. The FDA has a plan for approving AI software here and a list of AI-enabled medical devices here.

Autonomous Vehicles
Harder than it first seemed, but car companies like Tesla will eventually achieve better-than-human autonomy for highway driving, and later for city streets.

Decision support
Advanced virtual assistants can listen to and observe behaviors, build and maintain data models, and predict and recommend actions to assist people with and automate tasks that were previously only possible for humans to accomplish.

Supply chain management
AI applications are already appearing in predictive maintenance, risk management, procurement, order fulfillment, supply chain planning and promotion management.

Marketing
AI applications are already appearing in real-time personalization, content and media optimization and campaign orchestration to augment, streamline and automate marketing processes and tasks constrained by human costs and capability, and to uncover new customer insights and accelerate deployment at scale.

Making business smarter: Customer Support
AI applications are already appearing in virtual customer assistants with speech recognition, sentiment analysis, automated/augmented quality assurance and other technologies providing customers with 24/7 self- and assisted-service options across channels.

AI in National Security

Much like classical computers, AI is dual-use: AI developed for commercial applications can also be used for national security.

AI/ML and Ubiquitous Technical Surveillance
AI/ML has made most cities untenable for traditional tradecraft. Machine learning can integrate travel data (customs, airline, train, car rental, hotel, license plate readers…), feeds from CCTV cameras for facial recognition and gait recognition, and breadcrumbs from wireless devices, and then combine them with DNA sampling. The result is automated persistent surveillance.

China’s employment of AI as a tool of repression and surveillance of the Uyghurs is a reminder of a dystopian future in which totalitarian regimes use AI-enabled ubiquitous surveillance to repress and monitor their own populaces.

AI/ML on the Battlefield
AI will enable new levels of performance and autonomy for weapon systems – autonomously collaborating assets (e.g., drone swarms, ground vehicles) that can coordinate attacks, ISR missions, and more.

It will also fuse and make sense of sensor data (detecting threats in optical/SAR imagery, classifying aircraft based on radar returns, searching for anomalies in radio frequency signatures, etc.). Machine learning is better and faster than humans at finding targets hidden in a high-clutter background, enabling automated target detection and fires from satellites and UAVs.

For example, an Unmanned Aerial Vehicle (UAV) or Unmanned Ground Vehicles with on board AI edge computers could use deep learning to detect and locate concealed chemical, biological and explosive threats by fusing imaging sensors and chemical/biological sensors.

Other examples include:

Use AI/ML countermeasures against adversarial, low probability of intercept/low probability of detection (LPI/LPD) radar techniques in radar and communication systems.

Given sequences of observations of unknown radar waveforms from arbitrary emitters without a priori knowledge, use machine learning to develop behavioral models to enable inference of radar intent and threat level, and to enable prediction of future behaviors.

For objects in space, use machine learning to predict and characterize a spacecraft’s possible actions, its subsequent trajectory, and what threats it can pose from along that trajectory, and to predict the outcomes of finite-burn, continuous-thrust, and impulsive maneuvers.

AI empowers other applications such as:

AI/ML in Collection
The front end of intelligence collection platforms has created a firehose of data that has overwhelmed human analysts. “Smart” sensors coupled with inference engines can pre-process raw intelligence and prioritize what data to transmit and store – helpful in degraded or low-bandwidth environments.

Human-Machine Teaming in Signals Intelligence
Applications with embedded intelligence have already begun to appear in commercial applications thanks to massive language models. For example – Copilot as a pair programmer in Microsoft Visual Studio Code. It’s not hard to imagine an AI that can detect and isolate anomalies and other patterns of interest in all sorts of signal data faster and more reliably than human operators.

AI-enabled natural language processing, computer vision, and audiovisual analysis can vastly reduce manual data processing. Advances in speech-to-text transcription and language analytics now enable reading comprehension, question answering, and automated summarization of large quantities of text. This not only prioritizes the work of human analysts, it’s a major force multiplier.

AI can also be used to automate data conversion such as translations and decryptions, accelerating the ability to derive actionable insights.

Human-Machine Teaming in Tasking and Dissemination
AI-enabled systems will automate and optimize tasking and collection for platforms, sensors, and assets in near-real time in response to dynamic intelligence requirements or changes in the environment.

AI will be able to automatically generate machine-readable versions of intelligence products and disseminate them at machine speed so that computer systems across the IC and the military can ingest and use them in real time without manual intervention.

Human-Machine Teaming in Exploitation and Analytics
AI-enabled tools can augment filtering, flagging, and triage across multiple data sets. They can identify connections and correlations more efficiently and at a greater scale than human analysts, and can flag those findings and the most important content for human analysis.

AI can fuse data from multiple sources, types of intelligence, and classification levels to produce accurate predictive analysis in a way that is not currently possible. This can improve indications and warnings for military operations and active cyber defense.

AI/ML Information warfare
Nation states have used AI systems to enhance disinformation campaigns and cyberattacks. This includes using “DeepFakes” (fake videos generated by a neural network that are nearly indistinguishable from reality). They are harvesting data on Americans to build profiles of our beliefs, behavior, and biological makeup for tailored attempts to manipulate or coerce individuals.

But because a large percentage of AI technology is open source, it is not limited to nation states. AI-powered cyber-attacks, deepfakes, and AI software paired with commercially available drones can create “poor man’s smart weapons” for use by rogue states, terrorists and criminals.

AI/ML Cyberwarfare
AI-enabled malware can learn and adapt to a system’s defensive measures by probing a target system for configuration and operational patterns, customizing the attack payload, and choosing the most opportune time to execute it so as to maximize impact. Conversely, AI-enabled cyber-defensive tools can proactively locate and address network anomalies and system vulnerabilities.

Attacks Against AI – Adversarial AI
As AI proliferates, defeating adversaries will be predicated on defeating their AI, and vice versa. As Neural Networks take over sensor processing and triage tasks, a human may only be alerted if the AI deems something suspicious. An adversary therefore only needs to defeat the AI to evade detection, not necessarily a human.

Adversarial attacks against AI fall into three types:

AI Attack Surfaces
Electronic Attack (EA), Electronic Protection (EP), and Electronic Support (ES) all have analogues in the AI algorithmic domain. In the future, we may play the same game over the “algorithmic spectrum,” denying our adversaries their AI capabilities while defending ours. Others can steal or poison our models or manipulate our training data.
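One well-known class of attack is the evasion attack, where an input is nudged just enough to be misclassified. Below is a hedged sketch of the Fast Gradient Sign Method (FGSM); the stand-in classifier and random “image” are illustrative assumptions, not a specific fielded system:

```python
# A hedged sketch of one well-known evasion technique, the Fast Gradient Sign
# Method (FGSM): nudge an input just enough, in the direction that increases the
# model's error, so a trained classifier misreads it.
import torch
import torch.nn as nn

def fgsm_evasion(model, x, true_label, epsilon=0.03):
    """Return a slightly perturbed copy of x that pushes the model toward an error."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), true_label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Illustrative use with a stand-in classifier and a random "image"
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
adversarial_image = fgsm_evasion(model, image, label)
```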

What Makes AI Possible Now?

 Four changes make Machine Learning possible now:

  1. Massive Data Sets
  2. Improved Machine Learning algorithms
  3. Open-Source Code, Pretrained Models and Frameworks
  4. More computing power

Massive Data Sets
Machine Learning algorithms tend to require large quantities of training data in order to produce high-performance AI models. (Training OpenAI’s GPT-3 natural language model, with 175 billion parameters, takes 1,024 Nvidia A100 GPUs more than one month.) Today, strategic and tactical sensors pour in a firehose of images, signals and other data. Billions of computers, digital devices and sensors connected to the Internet produce and store large volumes of data, providing other sources of intelligence. For example, facial recognition requires millions of labeled images of faces for training data.

Of course more data only helps if the data is relevant to your desired application. Training data needs to match the real-world operational data very, very closely to train a high-performing AI model.

Improved Machine Learning algorithms
The first Machine Learning algorithms are decades old, and some remain incredibly useful. However, researchers have discovered new algorithms that have greatly accelerated the field’s cutting edge. These new algorithms have made Machine Learning models more flexible, more robust, and more capable of solving different types of problems.

Open-Source Code, Pretrained Models and Frameworks
Previously, developing Machine Learning systems required a lot of expertise and custom software development that put it out of reach for most organizations. Now open-source code libraries and developer tools allow organizations to use and build upon the work of external communities. No team or organization has to start from scratch, and many parts that used to require highly specialized expertise have been automated. Even non-experts and beginners can create useful AI tools. In some cases, open-source ML models can be reused outright or purchased. Combined with standard competitions, open-source code, pretrained models and frameworks have moved the field forward faster than any federal lab or contractor. It’s been a feeding frenzy, with the best and brightest researchers trying to one-up each other to prove which ideas are best.

The downside is that, unlike past DoD technology development – where the DoD led it, could control it, and had the most advanced technology (like stealth and electronic warfare) – in most cases the DoD will not have the most advanced algorithms or models. The analogy for AI is closer to microelectronics than it is to EW. The path forward for the DoD should be supporting open research while optimizing on data set collection, harvesting research results, and fast application.

More computing power – special chips
Machine Learning systems require a lot of computing power. Today, it’s possible to run Machine Learning algorithms on massive datasets using commodity Graphics Processing Units (GPUs). While many of the AI performance improvements have been due to human cleverness in better models and algorithms, most of the gains have come from the massive increase in compute performance. (See the semiconductor section.)

More computing power – AI In the Cloud
The rapid growth in the size of machine learning models has been enabled by the move to large data center clusters. The size of machine learning models is limited by the time it takes to train them. For example, in training on images, the computation scales with the number of pixels: ImageNet models train on 224×224-pixel images, but HD (1920×1080) images require roughly 40x more computation and memory. Large Natural Language Processing tasks – e.g. summarizing articles, English-to-Chinese translation – require enormous models like OpenAI’s GPT-3. GPT-3 uses 175 billion parameters and was trained on a cluster of 1,024 Nvidia A100 GPUs that cost ~$25 million (which is why large clusters exist only in the cloud or at the largest companies and government agencies). Facebook’s Deep Learning and Recommendation Model (DLRM) was trained on 1TB of data and has 24 billion parameters. Some cloud vendors train on >10TB data sets.

Instead of investing in the massive number of computers needed for training, companies can use the enormous on-demand, off-premises hardware in the cloud (e.g. Amazon AWS, Microsoft Azure) both for training machine learning models and for deploying inference.

We’re Just Getting Started
Progress in AI has been growing exponentially. The next 10 years will see a massive improvement in AI inference and training capabilities. This will require regular refreshes of the hardware – on chips and in cloud clusters – to take advantage of it. This is the AI version of Moore’s Law on steroids – applications that are completely infeasible today will be easy in 5 years.

What Can’t AI Do?

While AI can do a lot of things better than humans when focused on a narrow objective, there are many things it still can’t do. AI works well in specific domains where you have lots of data, time and resources to train, and domain expertise to set the right goals/rewards during training – but that is not always the case.

For example, AI models are only as good as the fidelity and quality of the training data. Bad labels can wreak havoc on your training results. Protecting the integrity of the training data is critical.

In addition, AI is easily fooled by out-of-domain data (things it hasn’t seen before). One cause is “overfitting”: when a model trains for too long on sample data, or when the model is too complex, it can start to learn the “noise,” or irrelevant information, within the dataset. The model memorizes the noise, fits too closely to the training set, and becomes unable to generalize well to new data – so it cannot perform the classification or prediction tasks it was intended for. However, if you stop training too early or exclude too many important features, you may encounter the opposite problem: “underfitting.” Underfitting occurs when the model has not trained for enough time, or the input variables are not significant enough to determine a meaningful relationship between the input and output variables.
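A quick, illustrative way to spot overfitting is to compare a model’s accuracy on its own training data with its accuracy on held-out data (synthetic data and scikit-learn, for illustration only):

```python
# A quick overfitting check: a model that scores far better on its own training
# data than on held-out data has memorized noise rather than learned a pattern.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(max_depth=None).fit(X_train, y_train)  # unconstrained, prone to overfit
print("training accuracy:  ", model.score(X_train, y_train))  # typically ~1.0
print("validation accuracy:", model.score(X_val, y_val))      # noticeably lower -> overfitting
```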

AI is also poor at estimating uncertainty/confidence (and at explaining its decision-making). It can’t choose its own goals. (Executives need to define the decision that the AI will execute. Without well-defined decisions to be made, data scientists will waste time, energy and money.) Except for simple cases, an AI can’t (yet) figure out cause and effect or why something happened. It can’t think creatively or apply common sense.

AI is not very good at creating a strategy (unless it can pull from previous examples and mimic them – but then it fails when faced with the unexpected). And it lacks generalized intelligence, i.e. the ability to generalize knowledge and translate learning across domains.

All of these are research topics actively being worked on. Solving these will take a combination of high-performance computing, advanced AI/ML semiconductors, creative machine learning implementations and decision science. Some may be solved in the next decade, at least to a level where a human can’t tell the difference.

Where is AI in Business Going Next?

Skip this section if you’re interested in national security applications

Just as classic computers were applied to a broad set of business, science and military applications, AI is doing the same. AI is exploding not only in research and infrastructure (which go wide) but also in the application of AI to vertical problems (which go deep and depend more than ever on expertise). Some of the new applications on the horizon include human/AI teaming (AI helping in programming and decision making), smarter robotics and autonomous vehicles, AI-driven drug discovery and design, healthcare diagnostics, chip electronic design automation, and basic science research.

Advances in language understanding are being pursued to create systems that can summarize complex inputs and engage through human-like conversation, a critical component of next-generation teaming.

Where is AI and National Security Going Next?

In the near future AI may be able to predict the future actions an adversary could take and the actions a friendly force could take to counter these. The 20th century model loop of Observe–Orient–Decide and Act (OODA) is retrospective; an observation cannot be made until after the event has occurred. An AI-enabled decision-making cycle might be ‘sense–predict–agree–act’: AI senses the environment; predicts what the adversary might do and offers what a future friendly force response should be; the human part of the human–machine team agrees with this assessment; and AI acts by sending machine-to-machine instructions to the small, agile and many autonomous warfighting assets deployed en masse across the battlefield.

An example of this is DARPA’s ACE (Air Combat Evolution) program, which is developing a warfighting concept for combined arms using manned and unmanned systems. Humans will fight in close collaboration with autonomous weapon systems in complex environments, with tactics informed by artificial intelligence.

A Once-in-a-Generation Event
Imagine it’s the 1980s and you’re in charge of an intelligence agency. SIGINT and COMINT were analog and RF. You had worldwide collection systems with bespoke platforms in space, in the air, underwater, etc. And you wake up to a world that shifts from copper to fiber. Most of your people and equipment are going to be obsolete, and you need to learn how to capture those new bits. Almost every business process needed to change, new organizations needed to be created, new skills were needed, and old ones were obsoleted. That’s what AI/ML is going to do to you and your agency.

The primary obstacle to innovation in national security is not technology, it is culture. The DoD and IC must overcome a host of institutional, bureaucratic, and policy challenges to adopting and integrating these new technologies. Many parts of our culture are resistant to change, reliant on traditional tradecraft and means of collection, and averse to risk-taking (particularly acquiring and adopting new technologies and integrating outside information sources).

History tells us that late adopters fall by the wayside as more agile and opportunistic governments master new technologies.

Carpe Diem.

Want more Detail?

Read on if you want to know about Machine Learning chips, see a sample Machine Learning Pipeline and learn about the four types of Machine Learning.

 

Artificial Intelligence/Machine Learning Semiconductors

Skip this section if all you need to know is that special chips are used for AI/ML.

AI/ML, semiconductors, and high-performance computing are intimately intertwined – and progress in each is dependent on the others. (See the “Semiconductor Ecosystem” report.)

Some machine learning models can have trillions of parameters and require a massive number of specialized AI chips to run. Edge computers are significantly less powerful than the massive compute power that’s located at data centers and the cloud. They need low power and specialized silicon.

Why Dedicated AI Chips and Chip Speed Matter
Dedicated chips for neural nets (e.g. Nvidia GPUs, Xilinx FPGAs, Google TPUs) are faster than conventional CPUs for three reasons: 1) they use parallelization, 2) they have larger memory bandwidth and 3) they have fast memory access.

There are three types of AI Chips:

  • Graphics Processing Units (GPUs) – Thousands of cores, parallel workloads, widespread use in machine learning
  • Field-Programmable Gate Arrays (FPGAs) – Good for specific algorithms: compression, video encoding, cryptocurrency, genomics, search. Need specialists to program
  • Application-Specific Integrated Circuits (ASICs) – custom chips, e.g. Google TPUs

Matrix multiplication plays a big part in neural network computations, especially if there are many layers and nodes. Graphics Processing Units (GPUs) contain 100s or 1,000s of cores that can do these multiplications simultaneously. And neural networks are inherently parallel which means that it’s easy to run a program across the cores and clusters of these processors. That makes AI chips 10s or even 1,000s of times faster and more efficient than classic CPUs for training and inference of AI algorithms. State-of-the-art AI chips are dramatically more cost-effective than state-of-the-art CPUs as a result of their greater efficiency for AI algorithms.

Cutting-edge AI systems require not only AI-specific chips, but state-of-the-art AI chips. Older AI chips incur huge energy consumption costs that quickly balloon to unaffordable levels. Using older AI chips today means overall costs and slowdowns at least an order of magnitude greater than for state-of- the-art AI chips.

Cost and speed make it virtually impossible to develop and deploy cutting-edge AI algorithms without state-of-the-art AI chips. Even with state-of-the-art AI chips, training a large AI algorithm can cost tens of millions of dollars and take weeks to complete. With general-purpose chips like CPUs or older AI chips, this training would take much longer and cost orders of magnitude more, making staying at the R&D frontier impossible. Similarly, performing inference using less advanced or less specialized chips could involve similar cost overruns and take orders of magnitude longer.

In addition to off-the-shelf AI chips from Nvidia, Xilinx and Intel, large companies like Facebook, Google, and Amazon have designed their own chips to accelerate AI. The opportunity is so large that there are hundreds of AI accelerator startups designing their own chips, funded by tens of billions of dollars of venture capital and private equity. None of these companies owns a chip manufacturing plant (a fab), so they all use a foundry (an independent company that makes chips for others) like TSMC in Taiwan (or SMIC in China for its defense-related silicon).

A Sample of AI GPU, FPGA and ASIC AI Chips and Where They’re Made

IP (Intellectual Property) Vendors Also Offer AI Accelerators
AI chip designers can buy AI IP cores – prebuilt AI accelerators – from Synopsys (EV7x), Cadence (Tensilica AI), Arm (Ethos), Ceva (SensPro2, NeuPro), Imagination (Series4), ThinkSilicon (Neox), FlexLogic (eFPGA), Edgecortix and others.

Other AI Hardware Architectures
Spiking Neural Networks (SNNs) are a completely different approach from deep neural nets. A form of neuromorphic computing, they try to emulate how a brain works. SNN neurons use simple counters and adders – no matrix-multiply hardware is needed and power consumption is much lower. SNNs are good at unsupervised learning – e.g. detecting patterns in unlabeled data streams. Combined with their low power, they’re a good fit for sensors at the edge. Examples: BrainChip, GrAI Matter, Innatera, Intel.

Analog Machine Learning AI chips use analog circuits to do the matrix multiplication in memory. The result is extremely low-power AI for always-on sensors. Examples: Mythic (AMP), Aspinity (AML100), Tetramem.

Optical (photonics) AI computation promises performance gains over standard digital silicon, and some chips are nearing production. They use intersecting coherent light beams rather than switching transistors to perform matrix multiplies. Computation happens in picoseconds and requires only power for the laser (though off-chip digital transitions still limit power savings). Examples: Lightmatter, Lightelligence, Luminous, Lighton.

AI Hardware for the Edge
As more AI moves to the edge, the Edge AI accelerator market is segmenting into high-end chips for camera-based systems and low-power chips for simple sensors. For example:

AI Chips in Autonomous Vehicles, Augmented Reality and multicamera surveillance systems. These inference engines require high performance. Examples: Nvidia (Orin), AMD (Versal), and Qualcomm (Cloud AI 100, plus its acquisition of Arriver for automotive software).

AI Chips in Cameras for facial recognition and surveillance. These inference chips require a balance of processing power with low power. Putting an AI chip in each camera reduces latency and bandwidth. Examples: Hailo-8, Ambarella CV5S, Quadric (Q16), RealTek 3916N.

Ultralow-Power AI Chips Target IoT Sensors – IoT devices require very simple neural networks and can run for years on a single battery. Example applications: presence detection, wake-word detection, gunshot detection… Examples: Syntiant (NDP), Innatera, BrainChip.

Running on these edge devices are deep learning models, such as those from OmniML and Foghorn, specifically designed for edge accelerators.

AI/ML Hardware Benchmarks
While there are lots of claims about how much faster each of these chips is for AI/ML, there is now a set of standard benchmarks – MLCommons. These benchmarks were created by Google, Baidu, Stanford, Harvard and U.C. Berkeley.

One Last Thing – Non-Nvidia AI Chips and the “Nvidia Software Moat”
New AI accelerator chips have to cross the software moat that Nvidia has built around its GPUs. Because popular AI applications and frameworks are built on Nvidia’s CUDA software platform, new AI accelerator vendors that want to port those applications to their chips have to build their own drivers, compilers, debuggers, and other tools.

Details of a machine learning pipeline

This is a sample of the workflow (a pipeline) data scientists use to develop, deploy and maintain a machine learning model (see the detailed description here.)

The Types of Machine Learning

Skip this section if you want to believe it’s magic.

Machine Learning algorithms fall into four classes:

  1. Supervised Learning
  2. Unsupervised Learning
  3. Semi-supervised Learning
  4. Reinforcement Learning

They differ based on:

  • What types of data their algorithms can work with
  • For supervised and unsupervised learning, whether or not the training data is labeled or unlabeled
  • How the system receives its data inputs

Supervised Learning

  • A “supervisor” (a human or a software system) accurately labels each of the training data inputs with its correct associated output
  • Note that pre-labeled data is only required for the training data that the algorithm uses to train the AI model
  • In operation, during the inference phase, the AI generates its own labels, the accuracy of which will depend on the AI’s training
  • Supervised Learning can achieve extremely high performance, but it requires very large, labeled datasets
  • Using labeled inputs and outputs, the model can measure its accuracy and learn over time
  • For images a rule of thumb is that the algorithm needs at least 5,000 labeled examples of each category in order to produce an AI model with decent performance
  • In supervised learning, the algorithm “learns” from the training dataset by iteratively making predictions on the data and adjusting for the correct answer.
  • While supervised learning models tend to be more accurate than unsupervised learning models, they require upfront human intervention to label the data appropriately.

Supervised Machine Learning – Categories and Examples:

  • Classification problems – use an algorithm to assign data into specific categories, such as separating apples from oranges. Or classify spam in a separate folder from your inbox. Linear classifiers, support vector machines, decision trees and random forest are all common types of classification algorithms.
  • Regression– understands the relationship between dependent and independent variables. Helpful for predicting numerical values based on different data points, such as sales revenue projections for a given business. Some popular regression algorithms are linear regression, logistic regression and polynomial regression.
  • Example algorithms include: Logistic Regression and Back Propagation Neural Networks
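To make the supervised workflow above concrete, here is a minimal sketch (illustrative only, using scikit-learn and the iris dataset) that trains a classifier on labeled examples and measures its accuracy on unseen ones:

```python
# A minimal supervised-learning sketch: labeled examples in, a classifier out.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # features plus human-provided labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifier = LogisticRegression(max_iter=1_000).fit(X_train, y_train)  # learn from labeled data
print("accuracy on unseen examples:", classifier.score(X_test, y_test))
```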

Unsupervised Learning

  • These algorithms can analyze and cluster unlabeled data sets. They discover hidden patterns in data without the need for human intervention (hence, they are “unsupervised”)
  • They can extract features from the data without a label for the results
  • For an image classifier, an unsupervised algorithm would not identify the image as a “cat” or a “dog.” Instead, it would sort the training dataset into various groups based on their similarity
  • Unsupervised Learning systems are often less predictable, but as unlabeled data is usually more available than labeled data, they are important
  • Unsupervised algorithms are useful when developers want to understand their own datasets and see what properties might be useful in either developing automation or changing operational practices and policies
  • They still require some human intervention for validating the output 

Unsupervised Machine Learning – Categories and Examples

  • Clustering groups unlabeled data based on their similarities or differences. For example, K-means clustering algorithms assign similar data points into groups, where the K value represents the size of the grouping and granularity. This technique is helpful for market segmentation, image compression, etc.
  • Association finds relationships between variables in a given dataset. These methods are frequently used for market basket analysis and recommendation engines, along the lines of “Customers Who Bought This Item Also Bought” recommendations.
  • Dimensionality reduction is used when the number of features  (or dimensions) in a given dataset is too high. It reduces the number of data inputs to a manageable size while also preserving the data integrity. Often, this technique is used in the preprocessing data stage, such as when autoencoders remove noise from visual data to improve picture quality.
  • Example algorithms include: Apriori algorithm and K-Means
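A minimal unsupervised sketch to complement the categories above: with no labels at all, K-Means discovers the groupings hidden in the data (synthetic points, for illustration):

```python
# A minimal unsupervised-learning sketch: no labels, the algorithm groups the
# data into K clusters by similarity (synthetic points, for illustration).
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two obvious blobs of points
points = np.vstack([
    np.random.normal(loc=0.0, scale=0.5, size=(100, 2)),
    np.random.normal(loc=5.0, scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)  # discovered group centers, roughly (0,0) and (5,5)
```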

Difference between supervised and unsupervised learning

The main difference: Labeled data

  • Goals: In supervised learning, the goal is to predict outcomes for new data. You know up front the type of results to expect. With an unsupervised learning algorithm, the goal is to get insights from large volumes of new data. The machine learning itself determines what is different or interesting from the dataset.
  • Applications: Supervised learning models are ideal for spam detection, sentiment analysis, weather forecasting and pricing predictions, among other things. In contrast, unsupervised learning is a great fit for anomaly detection, recommendation engines, customer personas and medical imaging.
  • Complexity: Supervised learning is a simple method for machine learning, typically carried out with tools like R or Python. In unsupervised learning, you need powerful tools for working with large amounts of unclassified data. Unsupervised learning models are computationally complex because they need a large training set to produce intended outcomes.
  • Drawbacks: Supervised learning models can be time-consuming to train, and the labels for input and output variables require expertise. Meanwhile, unsupervised learning methods can have wildly inaccurate results unless you have human intervention to validate the output variables.

Semi-Supervised Learning

  • “Semi-supervised” algorithms combine techniques from Supervised and Unsupervised algorithms for applications with a small set of labeled data and a large set of unlabeled data.
  • In practice, using them leads to exactly what you would expect: a mix of the strengths and weaknesses of the Supervised and Unsupervised approaches
  • Typical algorithms are extensions to other flexible methods that make assumptions about how to model the unlabeled data. An example is Generative Adversarial Networks, which, trained on photographs, can generate new photographs that look authentic to human observers (deepfakes)

Reinforcement Learning

  • Training data is collected by an autonomous, self-directed AI agent as it perceives its environment and performs goal-directed actions
  • The rewards are input data received by the AI agent when certain criteria are satisfied.
  • These criteria are typically unknown to the agent at the start of training
  • Rewards often contain only partial information. They don’t signal which inputs were good or not
  • The system is learning to take actions to maximize its receipt of cumulative rewards
  • Reinforcement AI can defeat humans – in chess, Go…
  • There are no labeled datasets for every possible move
  • There is no assessment of whether a move was “good” or “bad”
  • Instead, partial labels reveal only the final outcome: “win” or “lose”
  • The algorithms explore the space of possible actions to learn the optimal set of rules for determining the best action that maximizes wins

Reinforcement Machine Learning – Categories and Examples

  • Algorithm examples include: DQN (Deep Q Network), DDPG (Deep Deterministic Policy Gradient), A3C (Asynchronous Advantage Actor-Critic Algorithm), NAF (Q-Learning with Normalized Advantage Functions), …
  • AlphaGo, a Reinforcement Learning system, played 4.9 million games of Go against itself in 3 days to learn how to play the game at a world-champion level
  • Reinforcement Learning is challenging to use in the real world, as the real world is not as heavily bounded as video games and time cannot be sped up in the real world
  • There are consequences to failure in the real world
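To make the trial-and-error idea concrete, here is a toy, illustrative tabular Q-learning sketch (not any production RL system): an agent in a five-state corridor learns, from rewards alone, that moving right is always the best action:

```python
# A toy tabular Q-learning sketch: states 0..4 form a corridor and reaching
# state 4 pays a reward of 1. From trial and error alone, the agent learns
# that "move right" is the best action in every state.
import random

n_states, actions = 5, [0, 1]          # action 0 = left, action 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise exploit the best-known action
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[state][a])
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Update this state/action estimate from the observed outcome
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# Learned policy for states 0..3 -- typically [1, 1, 1, 1], i.e. always move right
print([max(actions, key=lambda a: Q[s][a]) for s in range(n_states - 1)])
```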

(download a PDF of this article here)



Lessons for the DoD – From Ukraine and China

 Portions of this post previously appeared in War On the Rocks.


Looking at a satellite image of Ukraine online, I realized it was from Capella Space – one of our Hacking for Defense student teams, which now has 7 satellites in orbit.

National Security is Now Dependent on Commercial Technology
They’re not the only startup in this fight. An entire wave of new startups and scaleups are providing satellite imagery and analysis, satellite communications, and unmanned aerial vehicles supporting the struggle.

For decades, satellites that took detailed pictures of Earth were only available to governments and the high-resolution images were classified. Today, commercial companies have their own satellites providing unclassified imagery. The government buys and distributes commercial images from startups to supplement their own and shares them with Ukraine as part of a broader intelligence-sharing arrangement that the head of Defense Intelligence Agency described as “revolutionary.” By the end of the decade, there will be 1000 commercial satellites for every U.S. government satellite in orbit.

At the onset of the war in Ukraine, Russia launched a cyber-attack on Viasat’s KA-SAT satellite, which supplies Internet across Europe, including to Ukraine. In response to a (tweeted) request from Ukraine’s vice prime minister, Elon Musk’s Starlink satellite company shipped thousands of their satellite dishes and got Ukraine back on the Internet. Other startups are providing portable cell towers – “backpackable” and fixed. When these connect via satellite link, they can provide phone service and Wi-Fi capability. Another startup is providing a resilient, mesh local area network for secure tactical communications supporting ground units.

Drone technology was initially only available to national governments and militaries but is now democratized to low price points and available as internet purchases. In Ukraine, drones from startups are being used as automated delivery vehicles for resupply, and for tactical reconnaissance to discover where threats are. When combined with commercial satellite imagery, this enables pinpoint accuracy to deliver maximum kinetic impact in stopping opposing forces.

Equipment from large military contractors and other countries is also part of the effort. However, the equipment listed above is available commercially off-the-shelf, at dramatically cheaper prices than what’s offered by the large existing defense contractors, and is developed and delivered in a fraction of the time. The Ukraine conflict is demonstrating the changing character of war: low-cost emerging commercial technology is extremely effective when deployed against the larger 20th-century industrialized force that Russia is fielding.

While we should celebrate the organizations that have created and fielded these systems, the battle for Ukraine illustrates much larger issues in the Department of Defense.

For the first time ever, our national security is inextricably intertwined with commercial technology (drones, AI, machine learning, autonomy, biotech, cyber, semiconductors, quantum, high-performance computing, commercial access to space, et al.). And as we’re seeing on the Ukrainian battlefield, these technologies are changing the balance of power.

The DoD’s traditional suppliers of defense tools, technologies, and weapons – the prime contractors and federal labs – are no longer the leaders in these next-generation technologies: drones, AI, machine learning, semiconductors, quantum, autonomy, biotech, cyber, high-performance computing, et al. They know this, and they know that weapons that can be built at a fraction of the cost and upgraded via software will destroy their existing business models.

Venture capital and startups have spent 50 years institutionalizing the rapid delivery of disruptive innovation. In the U.S., private investors spent $300 billion last year to fund new ventures that can move with the speed and urgency that the DoD now requires. Meanwhile China has been engaged in a Civil/Military Fusion program since 2015 to harness these disruptive commercial technologies for its national security needs.

China – Civil/Military Fusion
Every year the Secretary of Defense has to issue a formal report to Congress: Military and Security Developments Involving the People’s Republic of China. Six pages of this year’s report describe how China is combining its military and civilian sectors in a national effort to develop a “world-class” military and become a world leader in science and technology. A key part of Beijing’s strategy includes developing and acquiring advanced dual-use technology. It’s worth thinking about what this means – China is not just using its traditional military contractors to build its defense ecosystem; it’s mobilizing its entire economy, commercial plus military suppliers. And we’re not.

DoD’s Civil/Military Orphan-Child – the Defense Innovation Unit
In 2015, before China started its Civil/Military Fusion effort, then-Secretary of Defense Ash Carter saw the need for the DoD to understand, embrace, and acquire commercial technology. To do so he started the Defense Innovation Unit (DIU). With offices in Silicon Valley, Austin, Boston, Chicago, and Washington, DC, it is the one DoD organization with the staffing and mandate to match commercial startups or scaleups to pressing national security problems. DIU bridges the divide between DoD requirements and the commercial technology needed to address them with speed and urgency, and it accelerates the connection of commercial technology to the military. Just as importantly, DIU helps the Department of Defense learn how to innovate at the same speed as tech-driven companies.

Many of the startups providing Ukraine satellite imagery and analysis, satellite communications, and unmanned aerial vehicles were found by the Defense Innovation Unit (DIU). Given that DIU is the Department of Defense’s most successful organization in developing and acquiring advanced dual-use technology, one would expect the department to scale the Defense Innovation Unit by a factor of ten. (Two years ago, the House Armed Services Committee in its Future of Defense Task Force report recommended exactly that—a 10X increase in budget.) The threats are too imminent and stakes too high not to do so.

So what happened?

Congress cut its budget by 20%.

And its well-regarded director just resigned in frustration because the Department is neither resourcing DIU nor moving fast or broadly enough in adopting commercial technology.

Why? The Defense Ecosystem is at a turning point. Defense innovation threatens entrenched interests. Given that the Pentagon budget is essentially fixed, creating new vendors and new national champions of the next generation of defense technologies becomes a zero-sum game.

The Defense Innovation Unit (DIU) had no advocates in its chain of command willing to go to bat for it, let alone scale it.

The Department of Defense has world-class people and organization for a world that no longer exists
The Pentagon’s relationship with startups and commercial companies, already an arm’s-length one, is hindered by a profound lack of understanding of how the commercial innovation ecosystem works and by a failure of imagination about what venture- and private-equity-funded innovation could offer. In the last few years new venture capital and private equity firms have raised money to invest in dual-use startups. New startups focused on national security have sprung up, and they and their investors have been banging on the closed doors of the Defense Department.

If we want to keep pace with our adversaries, we need to stop acting like we can compete with one hand tied behind our back. We need a radical reinvention of our civil/military innovation relationship. This would use Department of Defense funding, private capital, dual-use startups, existing prime contractors and federal labs in a new configuration that could look like this:


Create a new defense ecosystem encompassing startups and mid-sized companies at the bleeding edge, prime contractors as integrators of advanced technology, and federally funded R&D centers refocused on areas not covered by commercial tech (nuclear and hypersonics). Make it permanent by creating an innovation doctrine/policy.

Reorganize DoD Research and Engineering to allocate its budget and resources equally between traditional sources of innovation and new commercial sources of innovation.

  • Scale new entrants to the defense industrial base in dual-use commercial tech – AI/ML, Quantum, Space, drones, autonomy, biotech, underwater vehicles, shipyards, etc. that are not the traditional vendors. Do this by picking winners. Don’t give out door prizes. Contracts should be >$100M so high-quality venture-funded companies will play. And issue debt/loans to startups.

Reorganize DoD Acquisition and Sustainment to create and buy from new 21st-century arsenals – new shipyards, drone manufacturers, etc. that can make thousands of extremely low-cost, attritable systems – “the small, the agile and the many.”

  • Acquire at Speed. Today, the average Department of Defense major acquisition program takes anywhere from nine to 26 years to get a weapon in the hands of a warfighter. DoD needs a requirements, budgeting and acquisition process that operates at commercial speed (18 months or less) which is 10x faster than DoD procurement cycles. Instead of writing requirements, the department should rapidly assess solutions and engage warfighters in assessing and prototyping commercial solutions. We’ll know we’ve built the right ecosystem when a significant number of major defense acquisition programs are from new entrants.

  • Acquire with a commercially oriented process. Congress has already granted the Department of Defense “Other Transaction Authority” (OTA) as a way to streamline acquisitions so they do not need to use Federal Acquisition Regulations (FAR). DIU has created a “Commercial Solutions Opening” to mirror a commercial procurement process that leverages OTA. DoD could be applying Commercial Solutions Openings on a much faster and broader scale.

Integrate and create incentives for the Venture Capital/Private Equity ecosystem to invest at scale. The most important incentive would be for DoD to provide significant contracts for new entrants. (One new entrant which DIU introduced, Anduril, just received a follow-on contract for $1 billion. This should be one of many such contracts and not an isolated example.) More examples could include: matching dollars for national security investments (similar to the SBIR program but for investors), public/private partnership investment funds, no-carry loans (debt funding) to venture capital funds, or tax holidays and incentives – anything that gets tens of billions of private investment dollars into technology areas of national interest.

Buy where we can; build where we must. Congress mandated that the Department of Defense should use commercial off-the-shelf technology wherever possible, but the department fails to do this (see industry letter to the Department of Defense).

Coordinate with Allies. Expand the National Security Innovation Base (NSIB) to an Allied Security Innovation Base. Source commercial technology from allies.

This is a politically impossible problem for the Defense Department to solve alone. Changes at this scale will require Congressional and executive office action. Hard to imagine in the polarized political environment. But not impossible.

Put Different People in Charge and reorganize around this new ecosystem. The threats, speed of change, and technologies the United States faces in this century require radically different mindsets and approaches than those it faced in the 20th century. Today’s leaders in the DoD, executive branch and Congress haven’t fully grasped the size, scale, and opportunity of the commercial innovation ecosystem or how to build innovation processes to move with the speed and urgency to match the pace China has set.


Change is hard on the people and organizations inside the DoD who’ve spent years operating with one mindset and are now being asked to pivot to a new one.

But America’s adversaries have exploited the boundaries and borders between its defense and commercial and economic interests. Current approaches to innovation across the government — both in the past and under the current administration —  are piecemeal, incremental, increasingly less relevant, and insufficient.

These are not problems of technology. Solving them takes imagination, vision, and the willingness to confront the status quo. So far, all three are lacking.

Russia’s Black Sea flagship Moskva lying on the bottom of the sea and the thousands of destroyed Russian tanks illustrate the consequences of a defense ecosystem living in the past. We need transformation, not half-measures. The U.S. Department of Defense needs to change.

Historically, major defense reforms have sometimes come from inside the DoD, at other times from Congress (the National Security Act of 1947, the Goldwater-Nichols Act of 1986), and at still others from the President (Roosevelt’s creation of the Joint Chiefs in 1942, Eisenhower and the Department of Defense Reorganization Act of 1958).

It may be that the changes needed are so broad that the DoD can’t make them and Congress needs to act. If so, it’s their time to step up.

Carpe diem. Seize the day.

Here’s What Happened When Deputy Secretary of Defense Dr. Kathleen Hicks visited Stanford’s Gordian Knot Center for National Security Innovation

It was an honor to host U.S. Deputy Secretary of Defense Dr. Kathleen Hicks at Stanford’s Gordian Knot Center for National Security Innovation. (Think of the Deputy Secretary of Defense as the chief operating officer of a company – but in this case the company has roughly 3 million employees: ~1.4 million active duty, ~750,000 civilians, and ~800,000 in the National Guard and Reserves.)

She came to the Gordian Knot Center to discuss our unique approach to national security and innovation, and how our curriculum trains the next generation of innovators. The Deputy also heard from us how the Department can better partner with and leverage the U.S. innovation ecosystem to solve national security challenges.

Our goal for the Secretary’s visit was to give her a snapshot of how we’re supporting the Department of Defense priority of building an innovation workforce. We emphasized the critical distinction between a technical STEM-trained workforce (which we need) and an innovation workforce which we lack at scale.

Innovation incorporates lean methodologies (customer discovery, problem understanding, MVPs, pivots), coupled with speed and urgency, and a culture where failure equals rapid learning. All of these are accomplished with minimal resources, to deploy at scale products and services that are needed and wanted. We pointed out that Silicon Valley and Stanford have done this for 50 years. And China is outpacing us by adopting the very innovation methods we invented, integrating commercial technology with academic research, and delivering it to the People’s Liberation Army.

Therein lies the focus of our Gordian Knot Center: connect STEM with policy education and leverage the synergies between the two to develop innovative leaders who understand technology and policy and can solve problems and deliver solutions at speed and scale.

What We Presented
A key component of the Gordian Knot Center’s mission is to prepare and inspire future leaders to contribute meaningfully as part of the innovation workforce. We combine the unique strengths of Stanford and its location in Silicon Valley to solve problems across the spectrum of activities that create and sustain national power. The range of resources and capabilities we bring to the fight from the center’s unique position includes:

  • The insights and expertise of Stanford international and national security policy leaders
  • The technology insights and expertise of Stanford Engineering
  • Exceptional students willing to help the country win the Great Power Competition
  • Silicon Valley’s deep commercial technology ecosystem
  • Our experience in rapid problem understanding, rapid iteration and deployment of solutions with speed and urgency
  • Access to risk capital at scale

In the six months since we founded the Gordian Knot Center, we have focused on six initiatives, which we wanted to share with Secretary Hicks. Rather than having Joe Felter and me do all of the talking, 25 of our students, scholars, mentors, and alumni joined us to give the Secretary a 3-5 minute précis of their work, spanning all six of the Gordian Knot initiatives. Highlights of these presentations include:

  1. Hacking for Defense Teams – Vannevar Labs, FLIP, Disinformatix
  2. CONOPS Development
  3. National Security Education Technology, Innovation and Great Power Competition
  4. Defense Innovation Scholars Program – 25 students now, 50 by the end of the year
  5. Policy Impact and Outreach – ONR Hedge Strategy, NSC Quad Emerging Technology Track 1.5 Conference
  6. Internships and Professional Workforce Development – Innovation Workforce Vignettes

If you can’t see the slides click here

Throughout the 90-plus-minute session, Dr. Hicks posed insightful questions to the students and told our gathering that one of her key priorities is to accelerate innovation adoption across the DoD – including its organizational structure, processes, culture, and people.

It was encouraging to hear the words.

However, from where we sit…

  1. Our national security is now inextricably intertwined with commercial technology and is hindered by our lack of an integrated strategy at the highest level.
  2. Our adversaries have exploited the boundaries and borders between our defense and commercial and economic interests.
  3. Our approaches to innovation across the government – both in past administrations and the current one – are piecemeal, incremental, increasingly less relevant, and insufficient.

Listening to the Secretary’s conversations, I was further reminded of how radical a reinvention of our civil/military innovation relationship is necessary if we want to keep abreast of our adversaries. This would use DoD funding, private capital, dual-use startups, existing prime contractors, and federal labs in a new configuration. It would:

Create a new defense ecosystem encompassing startups, scaleups at the bleeding edge, prime contractors as integrators of advanced technology, federally funded R&D centers refocused on areas not covered by commercial tech (nuclear, hypersonics,…). Make it permanent by creating innovation doctrine/policy.

Create new national champions in dual-use commercial tech – AI/ML, quantum, space, drones, high-performance computing, next-gen networking, autonomy, biotech, underwater vehicles, shipyards, etc. – who are not the traditional vendors. Do this by picking winners. Don’t give out door prizes. Contracts should be >$100M so high-quality venture-funded companies will play. Until we have new vendors on the Major Defense Acquisition Program list, all we have in the DoD is innovation theater – not innovation.

Acquire at Speed. Today, the average DoD major acquisition program takes 9-26 years to get a weapon in the hands of a warfighter. We need a requirements, budgeting and acquisition process that operates at commercial speed (18 months or less) which is 10x faster than DoD procurement cycles. Instead of writing requirements, DoD should rapidly assess solutions and engage warfighters in assessing and prototyping commercial solutions.

Integrate and incent the Venture Capital/Private Equity ecosystem to invest at scale. Ask funders what it would take – e.g., massive tax holidays and incentives to get investment dollars into technology areas of national interest.

Recruit and develop leaders across the Defense Department prepared to meet contemporary threats and reorganize around this new innovation ecosystem. The DoD has world-class people and organizations for a world that in many ways no longer exists. The threats, speed of change, and technologies we face in this century require radically different mindsets and approaches than those we faced in the 20th century. Today’s senior DoD leaders must think and act differently than their predecessors of a decade ago. Leaders at every level must now understand the commercial ecosystem and how to move with the speed and urgency that China is setting.

It was clear that Deputy Secretary Hicks understands the need for most, if not all, of these changes and more. Unfortunately, given that the DoD budget is essentially fixed, creating new primes and new national champions of the next generation of defense technologies becomes a zero-sum game. It’s a politically impossible problem for the Defense Department to solve alone. Changes at this scale will require Congressional action. Hard to imagine in the polarized political environment. But not impossible.

These are our challenges, not just for the Gordian Knot Center for National Security Innovation but for our nation. We’ve taken them on, in the words of President John F. Kennedy, “not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win.”

The Quantum Technology Ecosystem – Explained

If you think you understand quantum mechanics,
you don’t understand quantum mechanics

Richard Feynman

IBM Quantum Computer

Tens of billions of dollars in public and private capital are being invested in quantum technologies. Countries across the world have realized that quantum technologies can be a major disruptor of existing businesses and could change the balance of military power. So much so that they have collectively invested ~$24 billion in quantum research and applications.

At the same time, a week doesn’t go by without another story about a quantum technology milestone or another quantum company getting funded. Quantum has moved out of the lab and is now the focus of commercial companies and investors. In 2021 venture capital funds invested over $2 billion in 90+ quantum technology companies, with over $1 billion of it going to quantum computing companies. In the last six months the quantum computing companies IonQ, D-Wave, and Rigetti went public at valuations close to a billion and a half dollars. Pretty amazing for computers that won’t be any better than existing systems for at least another decade – or more. So why the excitement about quantum?

The Quantum Market Opportunity

While most of the IPOs have been in Quantum Computing, Quantum technologies are used in three very different and distinct markets: Quantum Computing, Quantum Communications and Quantum Sensing and Metrology.

All three of these markets have the potential to be disruptive. In time, quantum computing could obsolete existing cryptography systems, but viable commercial applications are still speculative. Quantum communications could allow secure networking but are not a viable near-term business. Quantum sensors could create new types of medical devices, as well as new classes of military applications, but are still far from a scalable business.

It’s a pretty safe bet that 1) the largest commercial applications of quantum technologies won’t be the ones these companies currently think they’re going to be, 2) defense applications using quantum technologies will come first, and 3) if and when they do show up, they’ll destroy existing businesses and create new ones.

We’ll describe each of these market segments in detail. But first a description of some quantum concepts.

Key Quantum Concepts

Skip this section if all you want to know is that 1) quantum works, 2) yes, it is magic.

Quantum – The word “quantum” refers to quantum mechanics, which explains the behavior and properties of atomic and subatomic particles such as electrons, neutrinos, and photons.

Superposition – quantum particles exist in many possible states at the same time. So a particle is described as a “superposition” of all those possible states. They fluctuate until observed and measured. Superposition underpins a number of potential quantum computing applications.

Entanglement – is what Einstein called “spooky action at a distance.” Two or more quantum objects can be linked so that measurement of one dictates the outcomes for the other, regardless of how far apart they are. Entanglement underpins a number of potential quantum communications applications.

Observation – Superposition and entanglement only exist as long as quantum particles are not observed or measured. If you observe the quantum state you can get information, but it results in the collapse of the quantum system.

Qubit – is short for a quantum bit. It is a quantum computing element that leverages the principle of superposition to encode information via one of four methods: spin, trapped atoms and ions, photons, or superconducting circuits.
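To make the definitions above slightly more concrete, here is the standard textbook notation for a qubit in superposition and for an entangled pair (this is generic quantum mechanics, not a description of any particular vendor’s hardware):

```latex
% A single qubit is a weighted superposition of the basis states |0> and |1>.
% Measuring it yields 0 with probability |alpha|^2 and 1 with probability |beta|^2,
% and the superposition collapses to the observed value.
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

% A two-qubit entangled (Bell) state: measuring either qubit
% immediately fixes the outcome for the other, however far apart they are.
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)
```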

Quantum Computers – Background

Quantum computers are a really cool idea. They harness the unique behavior of quantum physics—such as superposition, entanglement, and quantum interference—and apply it to computing.

In a classical computer, transistors represent one of two states – either a 0 or a 1. Instead of transistors, quantum computers use quantum bits (called qubits). Qubits can exist in superposition – in both the 0 and 1 states simultaneously.

Classical computers use transistors as the physical building blocks of logic. Quantum computers may use trapped ions, superconducting loops, quantum dots, or vacancies in a diamond as theirs. The jury is still out on which approach will win.

In a classical computer, 2-14 transistors make up each of the seven basic logic gates (AND, OR, NAND, etc.). In a quantum computer, building a single logical qubit requires a minimum of 9, but more likely hundreds or thousands, of physical qubits (to provide error correction, stability, and fault tolerance in the face of decoherence).

In a classical computer, compute power increases roughly linearly with the number of transistors and clock speed. In a quantum computer, compute power increases exponentially with the addition of each logical qubit.
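To make that exponential claim concrete (a simplified illustration of state-space growth, not a benchmark of any machine): fully describing the state of n qubits on a classical computer requires 2^n complex amplitudes, so the memory needed just to write the state down doubles with every added qubit.

```python
# Memory a classical computer needs just to store the state vector of n qubits,
# assuming 16 bytes per complex amplitude (two 64-bit floats).
BYTES_PER_AMPLITUDE = 16

for n in (20, 30, 40, 50):
    amplitudes = 2 ** n                                # doubles with each qubit
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2**30     # convert bytes to GiB
    print(f"{n} qubits -> {amplitudes:,} amplitudes, ~{gib:,.2f} GiB")

# 30 qubits already needs ~16 GiB of RAM; 50 qubits needs ~16 million GiB.
# This is why classical machines hit a wall simulating even modest quantum systems.
```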

But qubits have high error rates and need to be kept ultracold. In contrast, classical computers have very low error rates and operate at room temperature.

Finally, classical computers are great for general-purpose computing. But quantum computers could theoretically solve some classes of problems exponentially faster than a classical computer. And with a sufficient number of logical qubits, a quantum computer becomes a Cryptographically Relevant Quantum Computer (CRQC). This is where quantum computers become very interesting and relevant for both commercial and national security applications. (More below.)

Types of Quantum Computers

Quantum computers could potentially do some things at speeds current computers cannot. Think of the difference between how fast you can count on your fingers and how fast today’s computers can count. That’s the order-of-magnitude speed-up a quantum computer could have over today’s computers for certain applications.

Quantum computers fall into four categories:

  1. Quantum Emulator/Simulator
  2. Quantum Annealer
  3. NISQ – Noisy Intermediate Scale Quantum
  4. Universal Quantum Computer – which can be a Cryptographically Relevant Quantum Computer (CRQC)

When you remove all the marketing hype, the only type that matters is #4 – a Universal Quantum Computer. And we’re at least a decade or more away from having those.

Quantum Emulator/Simulator
These are classical computers that you can buy today that simulate quantum algorithms. They make it easy to test and debug a quantum algorithm that someday may be able to run on a Universal Quantum Computer. Since they don’t use any quantum hardware they are no faster than standard computers.
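To give a sense of what these simulators actually do under the hood, here is a minimal sketch using NumPy (illustrative only, not any vendor’s product): the quantum state is stored as an ordinary array of complex amplitudes, and gates are just matrix multiplications, which is exactly why a simulator runs at classical speed.

```python
import numpy as np

# Minimal state-vector "simulator": one qubit, starting in |0>.
state = np.array([1.0 + 0j, 0.0 + 0j])      # amplitudes for |0> and |1>

# The Hadamard gate puts the qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
state = H @ state                             # a gate is just a matrix multiply

# "Measure" by sampling outcomes with probability |amplitude|^2.
probs = np.abs(state) ** 2                    # ~[0.5, 0.5]
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs / probs.sum())

print("P(0), P(1) =", probs)
print("empirical frequencies:", np.bincount(samples) / len(samples))
```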

A Quantum Annealer is a special-purpose quantum computer designed to run only combinatorial optimization problems, not general-purpose computing or cryptography problems. D-Wave has defined and owned this space. While its machines have more physical qubits than any other current system, those qubits are not organized as gate-based logical qubits. Currently this is a nascent commercial technology in search of a viable future market.

Noisy Intermediate-Scale Quantum (NISQ) computers. Think of these as prototypes of a Universal Quantum Computer – with several orders of magnitude fewer qubits. (They currently have 50-100 qubits, limited gate depths, and short coherence times.) Because they are short several orders of magnitude of qubits, NISQ computers cannot yet perform useful computation. However, they are a necessary phase in the learning curve, especially for driving total-system and software learning in parallel with the hardware development. Think of them as the training wheels for future universal quantum computers.

Universal Quantum Computers / Cryptographically Relevant Quantum Computers (CRQC)
This is the ultimate goal. If you could build a universal quantum computer with fault tolerance (i.e., millions of error-corrected physical qubits resulting in thousands of logical qubits), you could run quantum algorithms in cryptography, search and optimization, quantum systems simulation, and linear equation solving. (See here for a list of hundreds of quantum algorithms.) These would all dramatically outperform classical computation on large, complex problems whose difficulty grows exponentially as more variables are considered. Classical computers can’t attack these problems in reasonable times without so many approximations that the results are useless; we simply run out of time and transistors. These special algorithms are what make quantum computers potentially valuable. For example, Grover’s algorithm solves the problem of unstructured search of data. Quantum computers are also very good at minimization and optimization problems… think optimizing complex supply chains, energy states to form complex molecules, financial models, etc.
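For a rough sense of the scale of the speedups being discussed, here are the standard textbook complexity figures (they describe idealized algorithms, not the performance of any existing machine):

```latex
% Unstructured search over N items (number of oracle queries)
\text{classical: } O(N) \qquad \text{Grover: } O(\sqrt{N}) \quad \text{(a quadratic speedup)}

% Factoring an n-bit integer
\text{best known classical (number field sieve): } \exp\!\Big(O\big(n^{1/3}(\log n)^{2/3}\big)\Big)
\qquad \text{Shor: polynomial, roughly } O(n^{3})
```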

However, while all of these algorithms might have commercial potential one day, no one has yet come up with a use for them that would radically transform any business or military application. Except for one – and that one keeps people awake at night.

It’s Shor’s algorithm for integer factorization – a quantum algorithm for the problem that underlies much of today’s public key cryptography.

The security of today’s public key cryptography systems rests on the assumption that the underlying math problems are practically impossible to solve: factoring numbers hundreds of digits long into primes (e.g., RSA), or computing discrete logarithms over elliptic curves (e.g., ECDSA, ECDH) or finite fields (DSA). None of these can be done in any reasonable time with any type of classical computer, regardless of how large. Shor’s factorization algorithm can crack these codes if run on a Universal Quantum Computer. Uh-oh!
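To see why factoring is the linchpin, here is a toy RSA example with deliberately tiny primes (purely illustrative; real keys use primes hundreds of digits long). Anyone who can factor the public modulus back into its two primes can recompute the private key and read the traffic; that is exactly the step Shor’s algorithm would perform at scale.

```python
# Toy RSA with tiny primes -- for illustration only, never for real security.
# Requires Python 3.8+ for pow(e, -1, phi) (modular inverse).
p, q = 61, 53                      # the secret primes
n = p * q                          # 3233: the public modulus, shared with everyone
e = 17                             # public exponent
phi = (p - 1) * (q - 1)            # 3120, computable only if you know p and q
d = pow(e, -1, phi)                # 2753: the private exponent

message = 1234
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message

# An attacker who factors n = 3233 back into 61 * 53 can recompute phi and d,
# and the ciphertext is no longer secret. A fault-tolerant quantum computer
# running Shor's algorithm would do the same for the 2048-bit moduli used today.
```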

Impact of a Cryptographically Relevant Quantum Computer (CRQC)
Skip this section if you don’t care about cryptography.

Not only would a Universal Quantum Computer running Shor’s algorithm make today’s public key algorithms (used for asymmetric key exchange and digital signatures) useless, but adversaries can already mount a “harvest-now-and-decrypt-later” attack: record encrypted documents now with the intent to decrypt them in the future. That means everything you send encrypted today could be read retrospectively. Many applications – from ATMs to email – would be vulnerable, unless we replace those algorithms with ones that are “quantum-safe.”

When Will Current Cryptographic Systems Be Vulnerable?

The good news is that we’re nowhere near having a viable Cryptographically Relevant Quantum Computer, now or in the next few years. However, you can estimate when this will happen by calculating how many logical qubits are needed to run Shor’s algorithm and how long it would take to break these crypto systems. There are lots of people tracking these numbers (see here and here). Their estimate is that, with 8,194 logical qubits built from roughly 22.27 million physical qubits, it would take a quantum computer 20 minutes to break RSA-2048. The best estimate is that this might be possible in 8 to 20 years.

Post-Quantum / Quantum-Resistant Codes

That means if you want to protect the content you’re sending now, you need to migrate to new Post-Quantum/Quantum-Resistant codes. There are three things to consider in doing so (a simple way of combining them is sketched after the list):

  1. shelf-life time: the number of years the information must be protected by cyber-systems
  2. migration time: the number of years needed to properly and safely migrate the system to a quantum-safe solution
  3. threat timeline: the number of years before threat actors will be able to break the quantum-vulnerable systems
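A common rule of thumb for combining these three numbers (often attributed to Michele Mosca): if the shelf-life time plus the migration time exceeds the threat timeline, some of the data you are protecting today will be readable by an adversary before the migration is finished. A minimal sketch follows; the example values are illustrative placeholders, not estimates from this article.

```python
# Back-of-the-envelope check of the three timelines above.
# If shelf_life + migration_time > threat_timeline, migration is already urgent.
# The numbers below are illustrative placeholders, not predictions.

shelf_life_years = 10        # how long the information must stay protected
migration_time_years = 5     # how long a safe migration to quantum-safe crypto takes
threat_timeline_years = 12   # earliest plausible arrival of a CRQC

exposure = shelf_life_years + migration_time_years - threat_timeline_years
if exposure > 0:
    print(f"At risk: roughly {exposure} years of protected data could be exposed.")
else:
    print("Within margin for now, but revisit these estimates regularly.")
```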

These new cryptographic systems would be secure against both quantum and conventional computers and could interoperate with existing communication protocols and networks. The symmetric key algorithms of the Commercial National Security Algorithm (CNSA) Suite were selected to be secure for national security systems even if a CRQC is developed.

Cryptographic schemes that commercial industry believes are quantum-safe include lattice-based cryptography, hash trees, multivariate equations, and super-singular isogeny elliptic curves.

Estimates of when you will actually be able to buy a fully error-corrected quantum computer vary from “never” to somewhere between 8 and 20 years from now. (Some optimists believe even earlier.)

Quantum Communication

Quantum communications ≠ quantum computers. A quantum network’s value comes from its ability to distribute entanglement. These communication devices manipulate the quantum properties of photons/particles of light to build quantum networks.

This market includes secure quantum key distribution, clock synchronization, random number generation and networking of quantum military sensors, computers, and other systems.

Quantum Cryptography/Quantum Key Distribution
Quantum Cryptography/Quantum Key Distribution can distribute keys between authorized partners connected by a quantum channel and a classical authenticated channel. It can be implemented via fiber optics or free space transmission. China transmitted entangled photons (at one pair of entangled particles per second) over 1,200 km in a satellite link, using the Micius satellite.
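To give a flavor of how a quantum channel can be used to distribute a key, here is a heavily simplified classical simulation of the well-known BB84 protocol (an idealized sketch: perfect channel, no eavesdropper, no error correction or privacy amplification):

```python
import secrets

N = 32  # number of photons Alice sends

# Alice picks a random bit and a random basis (0 = rectilinear, 1 = diagonal) per photon.
alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]

# Bob measures each incoming photon in his own randomly chosen basis.
bob_bases = [secrets.randbelow(2) for _ in range(N)]
# If Bob's basis matches Alice's, he reads her bit; otherwise his result is random.
bob_bits = [bit if a_basis == b_basis else secrets.randbelow(2)
            for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)]

# They publicly compare bases (never the bits) and keep the positions that match.
sifted_key = [bit for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
              if a_basis == b_basis]
print("shared key bits:", sifted_key)

# In the real protocol an eavesdropper who measures photons in the wrong basis
# disturbs them; Alice and Bob detect this by comparing a sample of their bits.
# That detectability is the feature QKD adds over classical key exchange.
```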

The Good: it can detect the presence of an eavesdropper, a feature not provided in standard cryptography. The Bad: Quantum Key Distribution can’t be implemented in software or as a service on a network and cannot be easily integrated into existing network equipment. It lacks flexibility for upgrades or security patches. Securing and validating Quantum Key Distribution is hard and it’s only one part of a cryptographic system.

The view from the National Security Agency (NSA) is that quantum-resistant (or post-quantum) cryptography is a more cost effective and easily maintained solution than quantum key distribution. NSA does not support the usage of QKD or QC to protect communications in National Security Systems. (See here.) They do not anticipate certifying or approving any Quantum Cryptography/Quantum Key Distribution security products for usage by National Security System customers unless these limitations are overcome. However, if you’re a commercial company these systems may be worth exploring.

Quantum Random Number Generators (QRNGs)
Commercial Quantum Random Number Generators that use quantum effects (entanglement) to generate nondeterministic randomness are available today. (Government agencies can already make quality random numbers and don’t need these devices.)

Random number generators will remain secure even when a Cryptographically Relevant Quantum Computer is built.

Quantum Sensing and Metrology

Quantum sensors ≠ quantum computers.

This segment consists of Quantum Sensing (quantum magnetometers, gravimeters, …), Quantum Timing (precise time measurement and distribution), and Quantum Imaging (quantum radar, low-SNR imaging, …). Each of these areas can create entirely new commercial products or entirely new industries (e.g. new classes of medical devices) and new military systems (e.g. anti-submarine warfare, stealth aircraft detection, finding hidden tunnels and weapons of mass destruction). Some of these are achievable in the near term.

Quantum Timing
First-generation quantum timing devices already exist as microwave atomic clocks. They are used in GPS satellites to triangulate accurate positioning. The Internet and computer networks use network time servers and the NTP protocol to receive the atomic clock time from either the GPS system or a radio transmission.

The next generation of quantum clocks is even more accurate, using laser-cooled single ions confined together in an electromagnetic ion trap. This increased accuracy is not only important for scientists attempting to measure dark matter and gravitational waves; miniaturized, more accurate atomic clocks will also allow precision navigation in GPS-degraded or GPS-denied areas, e.g. in commercial and military aircraft, in tunnels and caves, etc.
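The reason clock accuracy translates so directly into navigation accuracy is simple physics: a GPS receiver turns time differences into distances at the speed of light, so every nanosecond of clock error becomes tens of centimeters of position error.

```latex
% Ranging from timing: a clock error \Delta t becomes a distance error d
d = c\,\Delta t, \qquad c \approx 3 \times 10^{8}\ \text{m/s}

% Example: a 1\,\text{ns} timing error
d \approx (3 \times 10^{8}\ \text{m/s}) \times (1 \times 10^{-9}\ \text{s}) = 0.3\ \text{m}
```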

Quantum Imaging
Quantum imaging is one of the most interesting and nearest-term applications. First-generation magnetometers, such as superconducting quantum interference devices (SQUIDs), already exist. Newer types of quantum imaging devices use entangled light, accelerometers, magnetometers, electrometers, and gravity sensors. These allow measurements of frequency, acceleration, rotation rate, electric and magnetic fields, photons, or temperature with extreme levels of sensitivity and accuracy.

These new sensors use a variety of quantum systems and effects: electronic, magnetic, or vibrational states of spin qubits, neutral atoms, or trapped ions. They use quantum coherence to measure a physical quantity, or quantum entanglement to improve the sensitivity or precision of a measurement beyond what is possible classically.

Quantum imaging has immediate uses in archaeology, and profound military applications. For example, submarine detection using quantum magnetometers or satellite gravimeters could make the ocean transparent, compromising the survivability of sea-based nuclear deterrents by detecting and tracking subs deep underwater.

Quantum sensors and quantum radar from companies like Rydberg can be game changers.

Gravimeters or quantum magnetometers could also detect concealed tunnels, bunkers, and nuclear materials. Magnetic resonance imaging could remotely ID chemical and biological agents. Quantum radar or LIDAR would enable extreme detection of electromagnetic emissions, enhancing ELINT and electronic warfare capabilities. It can use fewer emissions to get the same detection result, for better detection accuracy at the same power levels – even detecting stealth aircraft.

Finally, ghost imaging uses the quantum properties of light to detect distant objects using very weak illumination beams that are difficult for the imaged target to detect. It can increase accuracy and lessen the amount of radiation a patient is exposed to during X-rays. It can see through smoke and clouds. Quantum illumination is similar to ghost imaging but could provide even greater sensitivity.

National and Commercial Efforts
Countries across the world are making major investments – ~$24 billion in 2021 – in quantum research and applications.

Lessons Learned

  • Quantum technologies are emerging and disruptive to companies and defense
  • Quantum technologies cover Quantum Computing, Quantum Communications and Quantum Sensing and Metrology
    • Quantum computing could obsolete existing cryptography systems
    • Quantum communication could allow secure cryptography key distribution and networking of quantum sensors and computers
    • Quantum sensors could make the ocean transparent for Anti-submarine warfare, create unjammable A2/AD, detect stealth aircraft, find hidden tunnels and weapons of mass destruction, etc.
  • A few of these technologies are available now, some in the next 5 years and a few are a decade or more out
  • Tens of billions of public and private capital dollars are being invested in them
  • Defense applications will come first
  • The largest commercial applications won’t be the ones we currently think they’re going to be
    • when they do show up they’ll destroy existing businesses and create new ones

The Semiconductor Ecosystem – Explained

The last year has seen a ton written about the semiconductor industry: chip shortages, the CHIPS Act, our dependence on Taiwan and TSMC, China, etc.

But despite all this talk about chips and semiconductors, few understand how the industry is structured. I’ve found the best way to understand something complicated is to diagram it out, step by step. So here’s a quick pictorial tutorial on how the industry works.


The Semiconductor Ecosystem

We’re seeing the digital transformation of everything. Semiconductors – chips that process digital information — are in almost everything: computers, cars, home appliances, medical equipment, etc. Semiconductor companies will sell $600 billion worth of chips this year.

Looking at the figure below, the industry seems pretty simple. Companies in the semiconductor ecosystem make chips (the triangle on the left) and sell them to companies and government agencies (on the right). Those companies and government agencies then design the chips into systems and devices (e.g. iPhones, PCs, airplanes, cloud computing, etc.), and sell them to consumers, businesses, and governments. The revenue of products that contain chips is worth tens of trillions of dollars.

Yet, given how large it is, the industry remains a mystery to most. If you think of the semiconductor industry at all, you may picture workers in bunny suits in a fab clean room (the chip factory) holding a 12-inch wafer. Yet it is a business that manipulates materials an atom at a time, and its factories cost tens of billions of dollars to build. (By the way, that wafer has two trillion transistors on it.)

If you were able to look inside the simple triangle representing the semiconductor industry, instead of a single company making chips, you would find an industry with hundreds of companies, all dependent on each other. Taken as a whole it’s pretty overwhelming, so let’s describe one part of the ecosystem at a time.  (Warning –  this is a simplified view of a very complex industry.)

Semiconductor Industry Segments

The semiconductor industry has eight different types of companies. Each of these distinct industry segments feeds its resources up the value chain to the next until finally a chip factory (a “Fab”) has all the designs, equipment, and materials necessary to manufacture a chip. Taken from the bottom up, these semiconductor industry segments are:

  1. Chip Intellectual Property (IP) Cores
  2. Electronic Design Automation (EDA) Tools
  3. Specialized Materials
  4. Wafer Fab Equipment (WFE)
  5. “Fabless” Chip Companies
  6. Integrated Device Manufacturers (IDMs)
  7. Chip Foundries
  8. Outsourced Semiconductor Assembly and Test (OSAT)

The following sections provide more detail about each of these eight semiconductor industry segments.

Chip Intellectual Property (IP) Cores

  • The design of a chip may be owned by a single company, or…
  • Some companies license their chip designs – as software building blocks, called IP Cores – for wide use
  • There are over 150 companies that sell chip IP Cores
  • For example, Apple licenses IP Cores from ARM as a building block of their microprocessors in their iPhones and Computers

Electronic Design Automation (EDA) Tools

  • Engineers design chips (adding their own designs on top of any IP cores they’ve bought) using specialized Electronic Design Automation (EDA) software
  • The industry is dominated by three U.S. vendors – Cadence, Mentor (now part of Siemens) and Synopsys
  • It takes a large engineering team using these EDA tools 2-3 years to design a complex logic chip like a microprocessor used inside a phone, computer or server. (See the figure of the design process below.)

  • Today, as logic chips continue to become more complex, all Electronic Design Automation companies are beginning to insert Artificial Intelligence aids to automate and speed up the process

Specialized Materials and Chemicals

So far our chip is still in software. But to turn it into something tangible we’re going to have to physically produce it in a chip factory called a “fab.” The factories that make chips need to buy specialized materials and chemicals:

  • Silicon wafers – and to make those they need crystal growing furnaces
  • Over 100 Gases are used – bulk gases (oxygen, nitrogen, carbon dioxide, hydrogen, argon, helium), and other exotic/toxic gases (fluorine, nitrogen trifluoride, arsine, phosphine, boron trifluoride, diborane, silane, and the list goes on…)
  • Fluids (photoresists, top coats, CMP slurries)
  • Photomasks
  • Wafer handling equipment, dicing
  • RF Generators


Wafer Fab Equipment (WFE) Makes the Chips

  • These machines physically manufacture the chips
  • Five companies dominate the industry – Applied Materials, KLA, LAM, Tokyo Electron and ASML
  • These are some of the most complicated (and expensive) machines on Earth. They take a slice of an ingot of silicon and manipulate its atoms on and below its surface
  • We’ll explain how these machines are used a bit later on

 “Fabless” Chip Companies

  • Systems companies (Apple, Qualcomm, Nvidia, Amazon, Facebook, etc.) that previously used off-the-shelf chips now design their own chips.
  • They create chip designs (using IP Cores and their own designs) and send the designs to “foundries” that have “fabs” that manufacture them
  • They may use the chips exclusively in their own devices e.g. Apple, Google, Amazon ….
  • Or they may sell the chips to everyone e.g. AMD, Nvidia, Qualcomm, Broadcom…
  • They do not own Wafer Fab Equipment or use specialized materials or chemicals
  • They do use Chip IP and Electronic Design Software to design the chips


Integrated Device Manufacturers (IDMs)

  • Integrated Device Manufacturers (IDMs) design, manufacture (in their own fabs), and sell their own chips
    • They do not make chips for other companies (this is changing rapidly – see here.)
    • There are three categories of IDMs– Memory (e.g. Micron, SK Hynix), Logic (e.g. Intel), Analog (TI, Analog Devices)
  • They have their own “fabs” but may also use foundries
    • They use Chip IP and Electronic Design Software to design their chips
    • They buy Wafer Fab Equipment and use specialized materials and chemicals
  • The average cost of taping out a new leading-edge chip (3nm) is now $500 million

 Chip Foundries

  • Foundries make chips for others in their “fabs”
  • They buy and integrate equipment from a variety of manufacturers
    • Wafer Fab Equipment and specialized materials and chemicals
  • They design unique processes using this equipment to make the chips
  • But they don’t design chips
  • TSMC in Taiwan is the leader in logic, Samsung is second
  • Other fabs specialize in making chips for analog, power, rf, displays, secure military, etc.
  • It costs $20 billion to build a new-generation (3nm) chip fabrication plant

Fabs

  • Fabs are short for fabrication plants – the factory that makes chips
  • Integrated Device Manufacturers (IDMs) and Foundries both have fabs. The only difference is whether they make chips for others to use or sell or make them for themselves to sell.
  • Think of a Fab as analogous to a book printing plant (see figure below)
  1. Just as an author writes a book using a word processor, an engineer designs a chip using electronic design automation tools
  2. An author contracts with a publisher who specializes in their genre and then sends the text to a printing plant. An engineer selects a fab appropriate for their type of chip (memory, logic, RF, analog)
  3. The printing plant buys paper and ink. A fab buys raw materials: silicon, chemicals, gases
  4. The printing plant buys printing machinery, presses, binders, trimmers. The fab buys wafer fab equipment, etchers, deposition, lithography, testers, packaging
  5. The printing process for a book uses offset lithography, filming, stripping, blueprints, plate making, binding and trimming. Chips are manufactured in a complicated process that manipulates atoms using etchers, deposition, and lithography. Think of it as offset printing at the atomic level. The wafers are then cut up and the chips are packaged
  6. The printing plant turns out millions of copies of the same book. The fab turns out millions of copies of the same chip

While this sounds simple, it’s not. Chips are probably the most complicated products ever manufactured.  The diagram below is a simplified version of the 1000+ steps it takes to make a chip.

Outsourced Semiconductor Assembly and Test (OSAT)

  • Companies that package and test chips made by foundries and IDMs
  • OSAT companies take the wafers made by foundries, dice (cut) them into individual chips, test them, and then package and ship them to the customer

 

Fab Issues

  • As chips have become denser (with trillions of transistors on a single wafer) the cost of building fabs has skyrocketed – now >$10 billion for one chip factory
  • One reason is that the cost of the equipment needed to make the chips has skyrocketed
    • Just one advanced lithography machine from ASML, a Dutch company, costs $150 million
    • There are ~500+ machines in a fab (not all as expensive as ASML)
    • The fab building is incredibly complex. The clean room where the chips are made is just the tip of the iceberg of a complex set of plumbing feeding gases, power, liquids all at the right time and temperature into the wafer fab equipment
  • The multi-billion-dollar cost of staying at the leading edge has meant most companies have dropped out. In 2001 there were 17 companies making the most advanced chips.  Today there are only two – Samsung in Korea and TSMC in Taiwan.
    • Given that China believes Taiwan is a province of China this could be problematic for the West.

What’s Next – Technology

It’s getting much harder to build chips that are denser, faster, and use less power, so what’s next?

  • Instead of making a single processor do all the work, logic chip designers have put multiple specialized processors inside of a chip
  • Memory chips are now made denser by stacking them 100+ layers high
  • As chips get more complex to design – which means larger design teams and longer time to market – Electronic Design Automation companies are embedding artificial intelligence to automate parts of the design process
  • Wafer equipment manufacturers are designing new equipment to help fabs make chips with lower power, better performance, optimum area-to-cost, and faster time to market

What’s Next – Business

The business model of Integrated Device Manufacturers (IDMs) like Intel is rapidly changing. In the past there was a huge competitive advantage in being vertically integrated i.e. having your own design tools and fabs. Today, it’s a disadvantage.

  • Foundries have economies of scale and standardization. Rather than having to invent it all themselves, they can utilize the entire stack of innovation in the ecosystem and just focus on manufacturing
  • AMD has proven that it’s possible to shift from an IDM to a fabless model. Intel is trying: it is going to use TSMC as a foundry for some of its own chips as well as set up its own foundry business

What’s Next – Geopolitics

Controlling advanced chip manufacturing in the 21st century may well prove to be like controlling the oil supply in the 20th. The country that controls this manufacturing can throttle the military and economic power of others.

  • Ensuring a steady supply of chips has become a national priority. (China’s largest import by dollar value is semiconductors – larger than oil)
  • Today, both the U.S. and China are rapidly trying to decouple their semiconductor ecosystems from each other; China is pouring $100+ billion of government incentives in building Chinese fabs, while simultaneously trying to create indigenous supplies of wafer fab equipment and electronic design automation software
  • Over the last few decades the U.S. moved most of its fabs to Asia. Today we are incentivizing bringing fabs and chip production back to the U.S.

An industry that previously was only of interest to technologists is now one of the largest pieces in great power competition.

What’s Plan B? – The Small, the Agile, and the Many

This post previously appeared in the Proceedings of the Naval Institute.


One of the most audacious and bold manifestos for the future of Naval innovation has just been posted by the Rear Admiral who heads up the Office of Naval Research. It may be the hedge we need to deter China in the South China Sea.


While You Were Out
In the two decades since 9/11, while the U.S. was fighting Al-Qaeda and ISIS, China built new weapons and developed new operational concepts to negate U.S. military strengths. They’ve built ballistic missiles with conventional warheads to hit our aircraft carriers. They converted reefs in international waters into airbases, creating unsinkable aircraft carriers that extend the range of their aircraft and are armed with surface-to-air missiles, making it dangerous to approach China’s mainland and Taiwan.

To evade our own fleet air defense systems, they’ve armed their missiles with maneuvering warheads, and to reduce our reaction time they have missiles that travel at hypersonic speed.

The sum of these Chinese offset strategies means that in the South China Sea the U.S. can no longer deter a war because we can no longer guarantee we can win one.

This does not bode well for our treaty allies, Japan, the Philippines, and South Korea. Control of the South China Sea would allow China to control fishing operations and oil and gas exploration; to politically coerce other countries bordering the region; to enforce an air defense identification zone (ADIZ) over the South China Sea; or to enforce a blockade around Taiwan or invade it.

What To Do About It?
Today the Navy has aircraft carriers, submarines, surface combatants, aircraft, and sensors under the sea and in space. Our plan to counter China can be summed up as: more of the same, but better and more tightly integrated.

This might be the right strategy. However, what if we’re wrong? What if our assumptions about the survivability of these naval platforms and the ability of our Marines to operate were based on incorrect assumptions about our investments in materiel, operational concepts, and mental models?

If so, it might be prudent for the Navy to have a hedge strategy. Think of a hedge as a “just in case” strategy. It turns out the Navy had one in WWII. And it won the war in the Pacific.

War Plan Orange
In the 1930s U.S. war planners thought about a future war with Japan. The result was “War Plan Orange” centered on the idea that ultimately, American battleships would engage the Japanese fleet in a gunnery battle, which the U.S. would win.

Unfortunately for us Japan didn’t adhere to our war plan. They were bolder and more imaginative than we were. Instead of battleships, they used aircraft carriers to attack us. The U.S. woke up on Dec. 7, 1941, with most of our battleships sitting on the bottom of Pearl Harbor. The core precept of War Plan Orange went to the bottom with it.

But the portfolio of options available to Admiral Nimitz and President Roosevelt was not limited to battleships. They had a hedge strategy in place in case the battleships were not the solution. The hedges? Aircraft carriers and submarines.

While the U.S. Navy’s primary investment pre-WW2 was in battleships, the Navy had also made a substantial alternative investment – in aircraft carriers and submarines. The Navy launched the first aircraft carrier in 1920. For the next two decades they ran fleet exercises with them. At the beginning of the war the U.S. Navy had seven aircraft carriers (CVs) and one aircraft escort vessel (AVG). By the end of the war the U.S. had built 111 carriers. (24 fleet carriers, 9 light carriers and 78 escort carriers.) 12 were sunk.

As it turned out, it was carriers, subs, and the Marines who won the Pacific conflict.

Our Current Plan
Fast forward to today. For the last 80 years, carrier strike groups and submarines have remained the preeminent formations for U.S. naval warfare.

China has been watching us operate and fight in this formation for decades. But what if carrier strike groups can no longer win a fight? What if the U.S. is underestimating China’s capabilities, intents, imagination, and operating concepts? What if they can disable or destroy our strike groups (via cyber, conventionally armed ICBMs, cruise missiles, hypersonics, drones, submarines, etc.)? If that’s a possibility, then what is the Navy’s 21st-century hedge? What is its Plan B?

Says Who?
Here’s where this conversation gets interesting. While I have an opinion, think tanks have an opinion, and civilians in the Pentagon have an opinion, RAdm Lorin Selby, the Chief of the Office of Naval Research (ONR), has more than just “an opinion.” ONR is the Navy’s science and technology systems command. Its job is to see over the horizon and think about what’s possible. Selby was previously deputy commander of the Naval Sea Systems Command (NAVSEA) and commander of the Naval Surface Warfare Centers (NSWC). As the chief engineer of the Navy, he was the master of engineering the large and the complex.

What follows is my paraphrase of RADM Selby’s thinking about the hedge strategy the Navy needs and how it should get there.

Diversification
A hedge strategy is built on the premise that you invest in different things, not more or better versions of the same.

If you look at the Navy force structure today and its plan for the next decade, at first glance you might say they have a diversified portfolio and a plan for more. The Navy has aircraft carriers, submarines, surface combatants, and many types of aircraft. And they plan for a distributed fleet architecture, including 321 to 372 manned ships and 77 to 140 large, unmanned vehicles.

But it is equally accurate to say that this is not a diversified portfolio, because all these assets share many of the same characteristics:

  • They are all large compared to their predecessors
  • They are all expensive – to the point where the Navy can’t afford the number of platforms our force structure assessments suggest it needs
  • They are all multi-mission and therefore complex
  • The system-to-system interactions to create these complex integrations drive up cost and manufacturing lead times
  • Long manufacturing lead times mean they have no surge capacity
  • They are acquired on a requirements model that lags operational identification of need by years…sometimes decades when you fold in the construction span times for some of these complex capabilities like carriers or submarines
  • They are difficult to modernize – The ability to update the systems aboard these platforms, even the software systems, still takes years to accomplish

If the primary asset of the U.S. fleet now and in the future is the large and the complex, then surely there must be a hedge, a Plan B, somewhere? (Like the pre-WW2 aircraft carriers.) In fact, there isn’t. The Navy has demos of alternatives, but there is no force structure built on a different set of principles that would complicate China’s plans and create doubt in our adversaries’ minds about whether they could prevail in a conflict.

The Hedge Strategy – Create “the small, the agile, and the many”
In a world where the large and the complex are either too expensive to generate en masse or potentially too vulnerable to put at risk, “the small, the agile, and the many” has the potential to define the future of Navy formations.

We need formations composed of dozens, hundreds, or even thousands of unmanned vehicles above, below, and on the ocean surface. We need to build collaborating, autonomous formations…NOT a collection of platforms.

This novel formation is going to be highly dependent on artificial intelligence and new software that enables cross-platform collaboration and human machine teaming.

To do this we need a different world view. One that is no longer tied to large 20th-century industrial systems, but to a 21st-century software-centric agile world.

The Selby Manifesto:

  • Digitally adept naval forces will outcompete forces organized around principles of industrial optimization. “Data is the new oil and software is the new steel.”
  • The systems engineering process we have built over the last 150 years is not optimal for software-based systems.
    • Instead, iterative design approaches dominate software design
  • The Navy has world-class engineering and acquisition processes to deal with hardware
    • but applying the same process and principles to digital systems is a mistake
  • The design principles that drive software companies are fundamentally different than those that drive industrial organizations.
  • Applying industrial-era principles to digital era technologies is a recipe for failure
  • The Navy has access to amazing capabilities that already exist. And part of our challenge will be to integrate those capabilities together in novel ways that allow new modes of operation and more effectiveness against operational priorities
  • There’s an absolute need to foster a collaborative partnership with academia and businesses – big businesses, small businesses, and startups
  • This has serious implications for how the Navy and Marine Corps need to change. What do we need to change when it comes to engineering and operating concepts?

How To Get “The Small, The Agile, and The Many” Tested and In The Water?
Today, “the small, the agile, and the many” has been run in war games, exercises, simulations, and small demonstrations, but it has not been built at scale in a formation of dozens, hundreds, or even thousands of unmanned vehicles above, below, and on the ocean’s surface. We need to prove whether these systems can fight alongside our existing assets (or independently if required).

ONR plans to rapidly prove that this idea works and that the Navy can build it, or to disprove the theory. Either way, the Navy needs to know quickly whether it has a hedge. Time is not on our side in the South China Sea.

ONR’s plan is to move boldly. They’re building this new “small, the agile, and the many” formation on digital principles, and they’re training a new class of program managers – digital leaders – to guide the journey through the complex software and data.

They are going to partner with industry using rapid, simple, and accountable acquisition processes, using them to move from discussion to contract in short time periods so they can get to work. And these processes are going to attract new partners and allies.

They’re going to use all the ideas already on the shelves, whether government shelves or commercial shelves, and focus on what can be integrated and then what must be invented.

All the while they’ve been talking to commanders in fleets around the world. And taking a page from digital engineering practices, instead of generating a list of requirements, they’re building to the operational need by asking “what is the real problem?” They are actively listening, using Lean and design thinking to hear and understand the problems, to build a minimal viable product – a prototype solution – and get it into the water. Then asking, did that solve the problem…no? Why not? Okay, we are going to go fix it and come back in a few months, not years.

The goal is to demonstrate this novel naval formation virtually, digitally, and then physically with feedback from in water experiments. Ultimately the goal is getting agile prototyping out to sea and doing it faster than ever before.
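As an illustration of that “virtual first” step, here is a minimal sketch of how a formation concept might be scored in simulation before any in-water experiment: a toy Monte Carlo estimate of sensor coverage as the number of small vehicles grows. The scenario, sensor radius, and metric are all invented for illustration; a real digital-engineering pipeline would use high-fidelity models and operationally meaningful measures.

```python
# Illustrative sketch only: a toy "virtual first" evaluation loop.
# The scenario and numbers are hypothetical.
import random


def coverage(n_vehicles: int, sensor_radius: float, area_side: float,
             n_targets: int = 200, trials: int = 50) -> float:
    """Estimate the fraction of randomly placed targets that fall within
    sensor range of at least one vehicle, averaged over random layouts."""
    detected = 0
    total = 0
    for _ in range(trials):
        vehicles = [(random.uniform(0, area_side), random.uniform(0, area_side))
                    for _ in range(n_vehicles)]
        for _ in range(n_targets):
            tx, ty = random.uniform(0, area_side), random.uniform(0, area_side)
            total += 1
            if any((vx - tx) ** 2 + (vy - ty) ** 2 <= sensor_radius ** 2
                   for vx, vy in vehicles):
                detected += 1
    return detected / total


if __name__ == "__main__":
    random.seed(1)
    # Compare a few hypothetical formation sizes before any in-water test.
    for n in (10, 50, 200):
        print(f"{n:4d} vehicles -> estimated coverage {coverage(n, 5.0, 100.0):.2f}")
```

Even a crude loop like this makes the trade explicit: candidate formations can be compared in minutes of compute before a single hull touches the water.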

In the end the goal is to effectively evaluate the idea of the small, the agile, and the many: how to iterate at scale and at speed, and how to take things that meet operational needs and make them part of the force structure, deploying them in novel naval formations and learning their operational capabilities, not just their technical merits. If we’re successful, we can help guarantee the rest of the century.

What Can Go Wrong?
During the Cold War the U.S. prided itself on developing offset strategies, technical or operational concepts that leapfrogged the Soviet Union. Today China has done that to us. They’ve surprised us with multiple offset strategies, and more are likely to come. The fact is that China is innovating faster than the Department of Defense; they’ve gotten inside the DoD’s OODA loop.

But China is not innovating faster than our nation as a whole. Innovation in our commercial ecosystem — in AI, machine learning, autonomy, commercial access to space, cyber, biotech, semiconductors (all technologies the DoD and Navy need) — continues to solve the toughest problems at speed and scale, attracting the best and the brightest with private capital that dwarfs the entire DoD R&E (research and engineering) budget.

RADM Selby’s plan of testing the hedge of “the small, the agile, and the many” using tools and technologies of the 21st century is exactly the right direction for the Navy.

However, in peacetime, bold, radical ideas are not welcomed. They disrupt the status quo. They challenge existing reporting structures, and in a world of finite budgets, money has to be taken from existing programs and primes, or programs even have to be killed, to make room for the new. Even when positioned as a hedge, existing vendors, existing Navy and DoD organizations, and existing political power centers will all see “the small, the agile, and the many” as a threat. It challenges careers, dollars, and mindsets. Many will do their best to impede, kill, or co-opt this idea.

We are outmatched in the South China Sea. And the odds are getting longer each year. In a war with China we won’t have years to rebuild our Navy.

A crisis is an opportunity to clear out the old to make way for the new. If senior leadership of the Navy, DoD, executive branch, and Congress truly believe we need to win this fight, that this is a crisis, then ONR and “the small, the agile, and the many” needs a direct report to the Secretary of the Navy and the budget and authority to make this happen.

The Navy and the country need a hedge. Let’s get started now.

The Gordian Knot Center for National Security Innovation at Stanford

penitus cogitare, cito agere – think deeply, act quickly

75 years ago, the Office of Naval Research (ONR) helped kickstart innovation in Silicon Valley with a series of grants to Fred Terman, Dean of Stanford’s Engineering school. Terman used the money to set up the Stanford Electronics Research Lab. He staffed it with his lab managers who built the first electronic warfare and electronic intelligence systems in WWII. This lab pushed the envelope of basic and applied research in microwave devices and electronics and within a few short years made Stanford a leader in these fields. The lab became ground zero for the wave of Stanford’s entrepreneurship and innovation in the 1950s and ’60s and helped form what would later be called Silicon Valley.

75 years later, ONR just laid down a bet again, one we believe will be equally transformative. They’re the first sponsors of the new Gordian Knot Center for National Security Innovation at Stanford that Joe Felter, Raj Shah, and I have started.


Gordian What?

A Gordian Knot is a metaphor for an intractable problem. Today, the United States is facing several seemingly intractable national security problems simultaneously.

We intend to help solve them in Stanford’s Gordian Knot Center for National Security Innovation. Our motto of penitus cogitare, cito agere, think deeply, act quickly, embraces our unique intersection of deep problem understanding, combined with rapid solutions. The Center combines six unique strengths of Stanford and its location in Silicon Valley.

  1. The insights and expertise of Stanford international and national security policy leaders
  2. The technology insights and expertise of Stanford Engineering
  3. Exceptional students willing to help the country win the Great Power Competition
  4. Silicon Valley’s deep technology ecosystem
  5. Our experience in rapid problem understanding, rapid iteration and deployment of solutions with speed and urgency
  6. Access to risk capital at scale

Our focus will match our motto. We’re going to coordinate resources at Stanford and peer universities, and across Silicon Valley’s innovation ecosystem to:

  • Scale national security innovation education
  • Train national security innovators
  • Offer insight, integration, and policy outreach
  • Provide a continual output of minimal viable products that can act as catalysts for solutions to the toughest problems

Why Now? Why Us?

Over the last decade we’ve created a series of classes in entrepreneurship, rapid innovation, and national security: Lean LaunchPad; National Science Foundation I-Corps; Hacking for Defense; Hacking for Diplomacy; Technology, Innovation and Modern War last year; and this year Technology, Innovation and Great Power Competition. These classes have been widely adopted across the U.S. and globally.

Simultaneously, each of us was actively engaged in helping different branches of the government understand, react, and deliver solutions in a rapidly changing and challenging environment. It’s become clear to us that for the first time in three decades, the U.S. is now engaged in a Great Power Competition. And we’re behind. Our national power (our influence and footprint on the world stage) is being challenged and effectively negated by autocratic regimes like China and Russia.

GKC joins a select group of national security think tanks

At Stanford, the Gordian Knot Center will sit in the Freeman Spogli Institute for International Studies, run by Mike McFaul, former U.S. ambassador to Russia. Mike has graciously agreed to be our Principal Investigator, along with Riitta Katila in the Management Science and Engineering Department (MS&E) in the Engineering School. MS&E is where disruptive technology meets national security, and it has a long history of brilliant contributions from Bill Perry, Sig Hecker, Elisabeth Pate-Cornell, and others. (Stanford’s other policy institute is the Hoover Institution, run by Condoleezza Rice, former secretary of state.) All are world-class leaders in understanding international problems, policies, and institutions. The Gordian Knot Center joins a number of other U.S. foreign affairs and national security think tanks across the country.

We intend to focus the new Center on solving problems across the spectrum of activities that create and sustain national power. National power is the combination of a country’s diplomacy (soft power and alliances), information, military and economic strength, as well as its finance, intelligence, and law enforcement – or DIME-FIL. Our projects will be those at the intersection of DIME-FIL with the onslaught of commercial technologies (AI, machine learning, autonomy, biotech, cyber, semiconductors, commercial access to space, et al.). And we’re going to hit the ground running by moving our two national security classes — Hacking for Defense and Technology, Innovation and Great Power Competition (which is now a required course in the International Policy program) — into the Center.

We hope our unique charter, “think deeply, act quickly” can complement the extraordinary work these other institutions provide.

The Office of Naval Research (ONR)

The Office of Naval Research (ONR) has been planning, fostering, and encouraging scientific research—and reimagining naval power—since 1946. The grants it made to Stanford that year were the first to any university.

Today, the Navy and the U.S. Marine Corps are looking for ways to accelerate technology development and delivery to our naval forces. There is broad consensus that the current pace of technology development and adoption is unsatisfactory, and that without significant reform we will lose the competition with China for maritime superiority in the South China Sea.

Rear Admiral Selby, Chief of Naval Research, has recognized that it’s no longer “business as usual.” That ONR delivering sustaining innovations for the existing fleet and Marine forces is no longer good enough to deter war or keep us in the fight. And that ONR once again needs to lead with disruptive technologies, new operational concepts, and new types of program management and mindsets. He’s on a mission to provide the Navy and U.S. Marine Corps with just that. When we approached him about the idea of the Gordian Knot Center, he reminded us that not only did ONR sponsor Stanford in 1946, it has been sponsoring our Hacking for Defense class since 2016! Now ONR has become our charter sponsor for the Gordian Knot Center.

We hope to earn that trust – for him, ONR, and the country.

Steve, Joe and Raj

Lessons Learned

The Center combines six unique strengths of Stanford and its location in Silicon Valley:

  • The insights and expertise of Stanford international and national security policy leaders
  • The technology insights and expertise of Stanford Engineering
  • Exceptional students willing to help the country win the Great Power Competition
  • Silicon Valley’s deep technology ecosystem
  • Our experience in rapid problem understanding, rapid iteration and deployment of solutions with speed and urgency
  • Access to risk capital at scale

Our focus will match our motto. We’re going to coordinate resources at Stanford and peer universities and across Silicon Valley’s innovation ecosystem to:

  • Scale national security innovation education
  • Train national security innovators
  • Offer insight, integration, and policy outreach
  • Provide a continual output of minimal viable products that can act as catalysts for solutions to the toughest problems