Hacking for Defense @ Stanford 2025 – Lessons Learned Presentations

The videos and PowerPoints embedded in this post are best viewed on steveblank.com

We just finished our 10th annual Hacking for Defense class at Stanford.

What a year.

Hacking for Defense, now in 70 universities, has teams of students working to understand and help solve national security problems. At Stanford this quarter the 8 teams of 41 students collectively interviewed 1106 beneficiaries, stakeholders, requirements writers, program managers, industry partners, etc. – while simultaneously building a series of minimal viable products and developing a path to deployment.

This year’s problems came from the U.S. Army, U.S. Navy, CENTCOM, Space Force/Defense Innovation Unit, the FBI, IQT, and the National Geospatial-Intelligence Agency.

We opened this year’s final presentations session with inspiring remarks by Joe Lonsdale on the state of defense technology innovation and a call to action for our students. During the quarter guest speakers in the class included former National Security Advisor H.R. McMaster; Jim Mattis, former Secretary of Defense; John Cogbill, Deputy Commander of the 18th Airborne Corps; Michael Sulmeyer, former Assistant Secretary of Defense for Cyber Policy; and John Gallagher, Managing Director of Cerberus Capital.

“Lessons Learned” Presentations
At the end of the quarter, each of the eight teams gave a final “Lessons Learned” presentation along with a 2-minute video to provide context about their problem. Unlike traditional demo days or Shark Tanks, which are, “Here’s how smart I am, and isn’t this a great product, please give me money,” the Lessons Learned presentations tell the story of each team’s 10-week journey and hard-won learning and discovery. For all of them it’s a roller-coaster narrative describing what happens when a team discovers that everything it thought it knew on day one was wrong, and how it eventually got it right.
While all the teams used the Mission Model Canvas, Customer Development and Agile Engineering to build Minimal Viable Products, each of their journeys was unique.

This year we had the teams add two new slides at the end of their presentation: 1) tell us which AI tools they used, and 2) their estimate of progress on the Technology Readiness Level and Investment Readiness Level.

Here’s how they did it and what they delivered.

Team Omnyra – improving visibility into AI-generated bioengineering threats.

If you can’t see the team Omnyra summary video click here

If you can’t see the Omnyra presentation click here

These are “Wicked” Problems
Wicked problems are complex problems with multiple moving parts, where the solution isn’t obvious and there is no definitive formula. The types of problems our Hacking For Defense students work on fall into this category. They are often ambiguous. They start with a problem from a sponsor, and not only is the solution unclear but figuring out how to acquire and deploy it is also complex. Most often students find that in hindsight the problem was a symptom of a more interesting and complex problem – and that acquisition of solutions in the Department of Defense is unlike anything in the commercial world. And the stakeholders and institutions often have different relationships with each other – some are collaborative, some have pieces of the problem or solution, and others might have conflicting values and interests.
The figure shows the types of problems Hacking for Defense students encounter, with the most common ones shaded.

Team HydraStrike – bringing swarm technology to the maritime domain.

If you can’t see the HydraStrike summary video click here.


If you can’t see the HydraStrike presentation click here

Mission-Driven Entrepreneurship
This class is part of a bigger idea – Mission-Driven Entrepreneurship. Instead of students or faculty coming in with their own ideas, we ask them to work on societal problems, whether they’re problems for the State Department or the Department of Defense or non-profits/NGOs or the Oceans and Climate or for anything the students are passionate about. The trick is we use the same Lean LaunchPad / I-Corps curriculum – and the same class structure – experiential, hands-on – driven this time by a mission-model not a business model. (The National Science Foundation and the Common Mission Project have helped promote the expansion of the methodology worldwide.)
Mission-driven entrepreneurship is the answer to students who say, “I want to give back. I want to make my community, country or world a better place, while being challenged to solve some of the toughest problems.”

Team HyperWatch – tracking hypersonic threats.

If you can’t see the HyperWatch video click here

If you can’t see the HyperWatch presentation click here

It Started With An Idea
Hacking for Defense has its origins in the Lean LaunchPad class I first taught at Stanford in 2011. I observed that teaching case studies and/or how to write a business plan as a capstone entrepreneurship class didn’t match the hands-on chaos of a startup. Furthermore, there was no entrepreneurship class that combined experiential learning with the Lean methodology. Our goal was to teach both theory and practice. The same year we started the class, it was adopted by the National Science Foundation to train Principal Investigators who wanted to get a federal grant for commercializing their science (an SBIR grant). The NSF observed, “The class is the scientific method for entrepreneurship. Scientists understand hypothesis testing” and relabeled the class as the NSF I-Corps (Innovation Corps). I-Corps became the standard for science commercialization for the National Science Foundation, National Institutes of Health and the Department of Energy, to date training 3,051 teams and launching 1,300+ startups.

Team ChipForce – Securing U.S. dominance in critical minerals.

If you can’t see the ChipForce video click here

If you can’t see the ChipForce presentation click here
Note: After briefing the Department of Commerce, the ChipForce team members were offered jobs with the department.

Origins Of Hacking For Defense
In 2016, brainstorming with Pete Newell of BMNT and Joe Felter at Stanford, we observed that students in our research universities had little connection to the problems their government was trying to solve or the larger issues civil society was grappling with. As we thought about how we could get students engaged, we realized the same Lean LaunchPad/I-Corps class would provide a framework to do so. That year we launched both Hacking for Defense and Hacking for Diplomacy (with Professor Jeremy Weinstein and the State Department) at Stanford. The Department of Defense adopted and scaled Hacking for Defense across 60 universities while Hacking for Diplomacy has been taught at Georgetown, James Madison University, Rochester Institute of Technology, University of Connecticut and now Indiana University, sponsored by the Department of State Bureau of Diplomatic Security (see here).

Team ArgusNet – instant geospatial data for search and rescue.

If you can’t see the ArgusNet video click here

If you can’t see the ArgusNet presentation click here

Goals for Hacking for Defense
Our primary goal for the class was to teach students Lean Innovation methods while they engaged in national public service.
In the class we saw that students could learn about the nation’s threats and security challenges while working with innovators inside the DoD and Intelligence Community. At the same time, the experience would introduce the sponsors, innovators inside the Department of Defense (DOD) and Intelligence Community (IC), to a methodology that could help them understand and better respond to rapidly evolving threats. We wanted to show that if we could get teams to rapidly discover the real problems in the field using Lean methods, and only then articulate the requirements to solve them, defense acquisition programs could operate with speed and urgency and deliver timely and needed solutions.
Finally, we wanted to familiarize students with the military as a profession and help them better understand its expertise, and its proper role in society. We hoped it would also show our sponsors in the Department of Defense and Intelligence community that civilian students can make a meaningful contribution to problem understanding and rapid prototyping of solutions to real-world problems.

Team NeoLens – AI-powered troubleshooting for military mechanics.

If you can’t see the NeoLens video click here

If you can’t see the NeoLens presentation click here

Go-to-Market/Deployment Strategies
The initial goal of the teams is to ensure they understand the problem. The next step is to see if they can find mission/solution fit (the DoD equivalent of commercial product/market fit). But most importantly, the class teaches the teams about the difficult and complex path of getting a solution into the hands of a warfighter/beneficiary. Who writes the requirement? What’s an OTA? What’s “color of money”? What’s a Program Manager? Who owns the current contract? …

Team Omnicomm – improving the quality, security and resiliency of communications for special operations units.

If you can’t see the Omnicomm video click here


If you can’t see the Omnicomm presentation click here

Mission-Driven in 70 Universities and Continuing to Expand in Scope and Reach
What started as a class is now a movement.
From its beginning with our Stanford class, Hacking for Defense is now offered in over 70 universities in the U.S., as well as in the UK as Hacking for the MOD and in Australia. In the U.S. the course is a program of record supported by Congress; H4D is sponsored by the Common Mission Project, Defense Innovation Unit (DIU), and the Office of Naval Research (ONR). Corporate partners include Boeing, Northrop Grumman and Lockheed Martin.
Steve Weinstein started Hacking for Impact (Non-Profits) and Hacking for Local (Oakland) at U.C. Berkeley, and Hacking for Oceans at both Scripps and UC Santa Cruz, as well as Hacking for Climate and Sustainability at Stanford. Jennifer Carolan started Hacking for Education at Stanford.

Team Strom – simplified mineral value chain.

If you can’t see the Strom video click here

If you can’t see the Strom presentation click here

What’s Next For These Teams?
When they graduate, the Stanford students on these teams have their pick of jobs in startups, companies, and consulting firms. This year, seven of our teams applied to the Defense Innovation Unit accelerator – the DIU Defense Innovation Summer Fellows Program – Commercialization Pathway. All seven were accepted. This further reinforced our thinking that Hacking for Defense has turned into a pre-accelerator – preparing students to transition their learning from the classroom to deployment.

See the teams present in person here

It Takes A Village
While I authored this blog post, this class is a team project. The secret sauce of the success of Hacking for Defense at Stanford is the extraordinary group of dedicated volunteers supporting our students in so many critical ways.

The teaching team consisted of myself and:

  • Pete Newell, retired Army Colonel and ex Director of the Army’s Rapid Equipping Force, now CEO of BMNT.
  • Joe Felter, retired Army Special Forces Colonel; and former deputy assistant secretary of defense for South Asia, Southeast Asia, and Oceania; and currently the Director of the Gordian Knot Center for National Security Innovation at Stanford which we co-founded in 2021.
  • Steve Weinstein, partner at America’s Frontier Fund, 30-year veteran of Silicon Valley technology companies and Hollywood media companies. Steve was CEO of MovieLabs, the joint R&D lab of all the major motion picture studios.
  • Chris Moran, Executive Director and General Manager of Lockheed Martin Ventures; the venture capital investment arm of Lockheed Martin.
  • Jeff Decker, a Stanford researcher focusing on dual-use research. Jeff served in the U.S. Army as a special operations light infantry squad leader in Iraq and Afghanistan.

Our teaching assistants this year were Joel Johnson, Rachel Wu, Evan Twarog, Faith Zehfuss, and Ethan Hellman.

31 Sponsors, Business and National Security Mentors
The teams were assisted by the originators of their problems – the sponsors.

Sponsors gave us their toughest national security problems: Josh Pavluk, Kari Montoya, Nelson Layfield, Mark Breier, Jason Horton, Stephen J. Plunkett, Chris O’Connor, David Grande, Daniel Owins, Nathaniel Huston, Joy Shanaberger, and David Ryan.
National Security Mentors helped students who came into the class with no knowledge of the Department of Defense, and the FBI understand the complexity, intricacies and nuances of those organizations: Katie Tobin, Doug Seich, Salvadore Badillo-Rios, Marco Romani, Matt Croce, Donnie Hasseltine, Mark McVay, David Vernal, Brad Boyd, Marquay Edmonson.
Business Mentors helped the teams understand if their solutions could be a commercially successful business: Diane Schrader, Marc Clapper, Laura Clapper, Eric Byler, Adam Walters, Jeremey Schoos, Craig Seidel, Rich “Astro” Lawson.

Thanks to all!

Teaching National Security Policy with AI

The videos embedded in this post are best viewed on steveblank.com

International Policy students will be spending their careers in an AI-enabled world. We wanted our students to be prepared for it. This is why we’ve adopted and integrated AI in our Stanford national security policy class – Technology, Innovation and Great Power Competition.

Here’s what we did, how the students used it, and what they (and we) learned.


Technology, Innovation and Great Power Competition is an international policy class at Stanford (taught by me, Eric Volmar, and Joe Felter). The course provides future policy and engineering leaders with an appreciation of the geopolitics of the U.S. strategic competition with great power rivals and the role critical technologies are playing in determining the outcome.

This course includes all that you would expect from a Stanford graduate-level class in the Masters in International Policy – comprehensive readings, guest lectures from current and former senior policy officials/experts, and deliverables in the form of written policy papers. What makes the class unique is that this is an experiential policy class. Students form small teams and embark on a quarter-long project that gets them out of the classroom to:

  • select a priority national security challenge, and then …
  • validate the problem and propose a detailed solution tested against actual stakeholders in the technology and national security ecosystem

The class combines multiple teaching tools.

  • Real world – Students work in teams on real problems from government sponsors
  • Experiential – They get out of the building to interview 50+ stakeholders
  • Perspectives – They get policy context and insights from lectures by experts
  • And this year… Using AI to Accelerate Learning

Rationale for AI
In introducing AI this quarter we had three things going for us: 1) by fall 2024 AI tools were good and getting exponentially better, 2) Stanford had set up an AI Playground enabling students to use a variety of AI tools (ChatGPT, Claude, Perplexity, NotebookLM, Otter.ai, Mermaid, Beautiful.ai, etc.), and 3) many students were using AI in classes, but it was usually ambiguous what they were allowed to do.

Policy students have to read reams of documents weekly. Our hypothesis was that our student teams could use AI to ingest and summarize content, identify key themes and concepts across the content, provide an in-depth analysis of critical content sections, and then synthesize and structure their key insights and apply them to solve their specific policy problem. They did all that, and much, much more.

While Joe Felter and I had arm-waved “we need to add AI to the class” Eric Volmar was the real AI hero on the teaching team. As an AI power user Eric was most often ahead of our students on AI skills. He threw down a challenge to the students to continually use AI creatively and told them that they would be graded on it. He pushed them hard on AI use in office hours throughout the quarter. The results below speak for themselves.

If you’re not familiar with these AI tools in practice it’s worth watching these one minute videos.

Team OSC
Team OSC was trying to answer the question: what is the appropriate level of financial risk for the U.S. Department of Defense in providing loans or loan guarantees in technology industries?

The team started using AI to do what we had expected, summarizing the stack of weekly policy documents using Claude 3.5. And like all teams, their unexpected use of AI was to create new leads for their stakeholder interviews. They found that they could ask AI to give them a list of leaders who were involved in similar programs, or who were involved in their program’s initial stages of development.

See how Team OSC summarized policy papers here:

If you can’t see the video click here

Claude was also able to create a list of leaders within the Department of Energy’s Title 17 credit programs, EXIM, DFC, and other federal credit programs that the team should interview. In addition, it created a list of leaders within the Congressional Budget Office and the Office of Management and Budget who would be able to provide insights. See the demo here:

If you can’t see the video click here
The team also used AI to transcribe podcasts. They noticed that key leaders of the organizations their problem came from had produced podcasts and YouTube videos. They used Otter.ai to transcribe these. That provided additional context for when they did interview them and allowed the team to ask insightful new questions.

If you can’t see the video click here

Note the power of fusing AI with interviews. The interviews ground the knowledge in the team’s lived experience.

The team came up with a use case the teaching team hadn’t thought of – using AI to critique the team’s own hypotheses. The AI not only gave them criticism but supported it with links from published scholars. See the demo here:

If you can’t see the video click here

Another use the teaching team hadn’t thought of was using Mermaid AI to create graphics for their weekly presentations. See the demo here:

If you can’t see the video click here

The surprises from this team kept coming. The last was that the team used Beautiful.ai to generate PowerPoint presentations. See the demo here:

If you can’t see the video click here

For all teams, using AI tools was a learning and discovery process all its own. By and large, students were unfamiliar with most tools on day 1.

Team OSC suggested that students start using AI tools early in the quarter and experiment with tools like ChatGPT and Otter.ai. Tools with steep learning curves, like Mermaid, should be adopted at the very start of the project to train their models.

Team OSC AI tools summary: AI tools are not perfect, so make sure to cross-check summaries, insights and transcriptions for accuracy and relevance. Be really critical of their outputs. The biggest takeaway is that AI works best when paired with human effort.

Team FAAST
The FAAST team was trying to understand how the U.S. can improve and scale the DoE FASST program in the urgent context of great power competition.

Team FAAST started using AI to do what we had expected, summarizing the stack of weekly policy documents they were assigned to read and synthesizing interviews with stakeholders.

One of the features of ChatGPT this team appreciated, and one important for a national security class, was the temporary chat feature – data they entered would not be used to train OpenAI’s models. See the demo below.

If you can’t see the video click here

The team used AI to do a few new things we didn’t expect – to generate emails to stakeholders and to create interview questions. During the quarter the team used ChatGPT, Claude, Perplexity, and NotebookLM. By the end of the 10-week class they were using AI to do a few more things we hadn’t expected. Their use of AI expanded to include simulating interviews. They gave ChatGPT specific instructions on who they wanted it to act like, and it provided personalized and custom answers. See the example here.

If you can’t see the video click here

Learning-by-doing was a key part of this experiential course. The big idea is that students learn both the method and the subject matter together. By learning it together, you learn both better.

Finally, they used AI to map stakeholders, get advice on their next policy move, and asked ChatGPT to review their weekly slides (by screenshotting the slides and putting them into ChatGPT and asking for feedback and advice.)

The FAAST team AI tool summary: ChatGPT was especially good with images or screenshots, and thus with multi-level tasks, and when they wanted to use custom instructions, as they did for the stakeholder interviews. Claude was more conversational and human in its writing, so they used it when sending emails. Perplexity was better for research because it provides citations, so you can access the web and get directed to the source it’s citing. NotebookLM was something they tried out, but it was not as successful. It was a cool tool that allowed them to summarize specific policy documents into a podcast, but the summaries were often pretty vague.

Team NSC Energy
Team NSC Energy was working on a National Security Council problem, “How can the United States generate sufficient energy to support compute/AI in the next 5 years?”

At the start of the class, the team began by using ChatGPT to summarize their policy papers and generate tailored interview questions, while Claude was used to synthesize research for background understanding. Because ChatGPT occasionally hallucinated information, by the end of the class they were cross-validating the summaries via Perplexity Pro.

The team also used ChatGPT and Mermaid to organize their thoughts and determine who they wanted to talk to. ChatGPT was used to generate code to put into the Mermaid flowchart organizer. Mermaid has its own language, so ChatGPT was helpful; the team didn’t have to learn all the syntax.
See the video of how Team NSC Energy used ChatGPT and Mermaid here:

If you can’t see the video click here
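To give a sense of what ChatGPT was generating for the team, here is a small, hypothetical Mermaid flowchart of the kind a stakeholder map might use (the nodes and relationships below are invented for illustration, not taken from the team’s actual map):

```mermaid
flowchart TD
    Sponsor[NSC Sponsor] --> Team[Student Team]
    Team --> DOE[Dept. of Energy Stakeholders]
    Team --> OMB[OMB Analysts]
    DOE --> Insights[Interview Insights]
    OMB --> Insights
```

Pasting text like this into the Mermaid editor renders it as a diagram, which is why having ChatGPT emit the syntax saved the team from learning it.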

Team Alpha Strategy
The Alpha Strategy team was trying to discover whether the U.S. could use AI to create a whole-of-government decision-making factory.

At the start of class, Team Alpha Strategy used ChatGPT-4o for policy document analysis and summary, as well as for stakeholder mapping. However, they discovered that going one by one through countless articles was time-consuming. So the team pivoted to using NotebookLM for document search and cross-analysis. See the video of how Team Alpha Strategy used NotebookLM here:

If you can’t see the video click here

The other tools the team used were custom GPTs to build stakeholder maps and diagrams and organize interview notes. There is a wide variety of specialized GPTs; one that was really helpful, they said, was Scholar GPT.
See the video of how Team Alpha Strategy used custom GPTs:

If you can’t see the video click here

Like other teams, Alpha Strategy used ChatGPT to summarize their interview notes and to create flow charts to paste into their weekly presentations.

Team Congress
The Congress team was exploring the question, “if the Department of Defense were given economic instruments of power, which tools would be most effective in the current techno-economic competition with the People’s Republic of China?”

As other teams found, Team Congress first used ChatGPT to extract key themes from hundreds of pages of readings each week and from press releases, articles, and legislation. They also used it for mapping and diagramming to identify potential relationships between stakeholders, or to creatively suggest alternate visualizations.

When Team Congress wasn’t able to reach their sponsor in the initial two weeks of the class, much like Team OSC, they used AI tools to pretend to be their sponsor, a member of the Defense Modernization Caucus. Once they realized its utility, they continued to do mock interviews using AI role play.

The team also used customized models of ChatGPT, but found these were limited in the number of documents they could upload, and they had a lot of content. So they used retrieval-augmented generation (RAG), which takes in a user’s query, matches it with relevant sources in a knowledge base, and feeds the retrieved material to the model to produce the output. See the video of how Team Congress used retrieval-augmented generation here:

If you can’t see the video click here
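The retrieve-then-prompt loop described above can be sketched in a few lines. This is a minimal toy illustration, not the team’s actual implementation: it uses simple word-overlap scoring where a real system would use embeddings, and the documents and query are invented.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve the sources most relevant to a query, then build a prompt
# that asks the model to answer using only those sources.

def tokenize(text):
    """Lowercase and split text into a set of words (toy tokenizer)."""
    return set(text.lower().split())

def retrieve(query, knowledge_base, top_k=2):
    """Rank documents by word overlap with the query; return the top matches."""
    query_words = tokenize(query)
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(query_words & tokenize(doc)),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, knowledge_base):
    """Stuff the retrieved sources into the prompt sent to the language model."""
    sources = retrieve(query, knowledge_base)
    context = "\n".join(f"- {doc}" for doc in sources)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

# Invented example documents standing in for the team's uploaded content.
knowledge_base = [
    "The Defense Modernization Caucus focuses on acquisition reform.",
    "Economic instruments of power include tariffs and export controls.",
    "Mermaid is a text-based diagramming language.",
]

prompt = build_prompt("Which economic instruments of power exist?", knowledge_base)
print(prompt)
```

The payoff is that the model only sees the handful of sources relevant to each question, sidestepping the document-upload limits the team ran into.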

Team NavalX
The NavalX team was learning how the U.S. Navy could expand its capabilities in Intelligence, Surveillance, and Reconnaissance (ISR) operations on general maritime traffic.

Like all teams, they used ChatGPT to summarize and extract from long documents, organize their interview notes, and define technical terms associated with their project. In this video, note their use of prompting to guide ChatGPT to format their notes.

See the video of how Team NavalX used tailored prompts for formatting interview notes here:

If you can’t see the video click here

They also asked ChatGPT to role-play a critic of their argument and solution so that they could find the weaknesses. They also began uploading many interviews at once, asking Claude to find themes or ideas in common that they might have missed on their own.

Here’s how the NavalX team used Perplexity for research.

If you can’t see the video click here
Like other teams, the NavalX team discovered you can customize ChatGPT by telling it how you want it to act.

If you can’t see the video click here

Another surprising insight from the team is that you can use ChatGPT to tell you how to write better prompts for itself.

If you can’t see the video click here
In summary, Team NavalX used Claude to translate texts from Mandarin, and found that ChatGPT was the best for writing tasks, Perplexity the best for research tasks, Claude the best for reading tasks, and NotebookLM the best for summarization.

Lessons Learned

  • Integrating AI into this class took a dedicated instructor with a mission to create a new way to teach using AI tools
  • The result was that AI vastly enhanced and accelerated the learning of all teams
    • It acted as a helpful collaborator
    • Fusing AI with stakeholder interviews was especially powerful
  • At the start of the class students were familiar with a few of these AI tools
    • By the end of the class they were fluent in many more of them
    • Most teams invented creative use cases
  • All Stanford classes we now teach – Hacking for Defense, Lean Launchpad, Entrepreneurship Inside Government – have AI integrated as part of the course
  • Next year’s AI tools will be substantially better

How the United States Gave Up Being a Science Superpower

US global dominance in science was no accident, but a product of a far-seeing partnership between public and private sectors to boost innovation and economic growth.

Since 20 January, US science has been upended by severe cutbacks from the administration of US President Donald Trump. A series of dramatic reductions in grants and budgets — including the US National Institutes of Health (NIH) slashing reimbursements of indirect research costs to universities from around 50% to 15% — and deep cuts to staffing at research agencies have sent shock waves throughout the academic community.

These cutbacks put the entire US research enterprise at risk. For more than eight decades, the United States has stood unrivalled as the world’s leader in scientific discovery and technological innovation. Collectively, US universities spin off more than 1,100 science-based start-up companies each year, leading to countless products that have saved and improved millions of lives, including heart and cancer drugs, and the mRNA-based vaccines that helped to bring the world out of the COVID-19 pandemic.

These breakthroughs were made possible mostly by a robust partnership between the US government and universities. This system emerged as an expedient wartime design to fund weapons research and development (R&D) in universities. It has fuelled US innovation, national security and economic growth.

But, today, this engine is being sabotaged in the Trump administration’s attempt to purge research programmes in areas it doesn’t support, such as climate change and diversity, equity and inclusion, and to rein in campus protests. But the broader cuts are also dismantling the very infrastructure that made the United States a scientific superpower. At best, US research is at risk from friendly fire; at worst, it’s political short-sightedness.

Researchers mustn’t be complacent. They must communicate the difference between eliminating ideologically objectionable programmes and undermining the entire research ecosystem. Here’s why the US research system is uniquely valuable, and what stands to be lost.

Unique innovation model

The backbone of US innovation is a close partnership between government, universities and industry. It is a well-calibrated ecosystem: federally funded research at universities drives scientific advancement, which in turn spins off technology, patents and companies. This system emerged in the wake of the Second World War, rooted in the vision of US presidential science adviser Vannevar Bush and a far-sighted Congress, which recognized that US economic and military strength hinge on investment in science (see ‘Two systems’).

Two Systems – How US and UK science diverged

When Winston Churchill became UK prime minister in 1940, he had at his side his science adviser, physicist Frederick Lindemann. The country’s wartime technical priorities focused on defence and intelligence — such as electronics-based weapons, radar-based air defence and plans for nuclear weapons. The country’s code-breaking organization at Bletchley Park, UK, was reading secret German messages using the earliest computers ever built.

Under Churchill, Lindemann influenced which projects received funding and which were sidelined. His top-down, centralized approach, with weapons development primarily in government research laboratories, shaped UK innovation during the Second World War — and led to its demise post-war.

Meanwhile, in the United States, Vannevar Bush, a former dean of engineering at the Massachusetts Institute of Technology (MIT) in Cambridge, became science adviser to US president Franklin Roosevelt in June 1940. Bush told him that war would be won or lost on the basis of advanced technology. He convinced Roosevelt that, although the army and navy should keep making conventional weapons (planes, ships, tanks), scientists could develop more-advanced weapons and deliver them faster. He argued that the only way the scientists could be productive was if they worked in a university setting, in civilian weapons laboratories run by academics. Roosevelt agreed.

In 1941, Bush convinced the president that academics should also be allowed to acquire and deploy weapons, which were manufactured in volume by US corporations. To manage this, Bush created the US Office of Scientific Research and Development. Each division was run by an academic hand-picked by Bush. And they were located in universities, including MIT, Harvard University, Johns Hopkins University, the California Institute of Technology, Columbia University and the University of Chicago.

Nearly 10,000 scientists, engineers, academics and their graduate students received draft deferments to work in these university labs. Their work led to developments in a wide range of technologies, including electronics, radar, rockets, napalm and the bazooka, penicillin and cures for malaria, as well as chemical and nuclear weapons.

The inflow of government money — US$9 billion (in 2025 dollars) between 1941 and 1945 — changed US universities, and the world. Before the war, academic research was funded mostly by non-profit organizations and industry. Now, US universities were getting more money than they had ever seen. They were full partners in wartime research, not just talent pools.

Wartime Britain had different constraints. First, England was being bombed daily and blockaded by submarines, so focusing on a smaller set of projects made sense. Second, the country was teetering on bankruptcy. It couldn’t afford the big investments that the United States made. Many areas of innovation — such as early computing and nuclear research — went underfunded. And when Churchill was voted out of office in 1945, with him went Lindemann and the coordination of UK science and engineering. Post-war austerity led to cuts to all government labs and curtailed innovation.

The differing economic realities of the United States and United Kingdom also shaped their innovation systems. The United States had an enormous industrial base, abundant capital and a large domestic market, which enabled large-scale investment in research and development. In the United Kingdom, key industries were nationalized, which reduced competition and slowed technological progress.

Although UK universities such as Cambridge and Oxford remained leaders in theoretical science, they struggled to commercialize their breakthroughs. For instance, pioneering work on computing at Bletchley Park didn’t turn into a thriving UK computing industry — unlike in the United States. Without government support, UK post-war innovation never took off.

Meanwhile, US universities and companies realized that the wartime government funding for research had been an amazing accelerator for science and engineering. Everyone agreed it should continue.

In 1950, Congress set up the US National Science Foundation to fund all basic science in the United States (except for life sciences, a role that the US National Institutes of Health would assume). The US Atomic Energy Commission spun off the Manhattan Project and the military took back advanced weapons development. In 1958, the US Defense Advanced Research Projects Agency and NASA would also form as federal research agencies. And decades of economic boom followed.

It need not have been this way. Before the Second World War, the United Kingdom led the world in many scientific domains, but its focus on centralized government laboratories rather than university partnerships stifled post-war commercialization. By contrast, the United States channelled wartime research funds into universities, enabling breakthroughs that were scaled up by private industry to drive the nation’s post-war economic boom. This partnership became the foundation of Silicon Valley and the aerospace, nuclear and biotechnology industries.

The US government remains the largest source of academic R&D funding globally — with a budget of US$201.9 billion for federal R&D in the financial year 2025. Out of this pot, more than two dozen research agencies direct grants to US universities, totalling $59.7 billion in 2023, with the NIH and the US National Science Foundation (NSF) receiving the most.

The agencies do this for a reason: they want professors at universities to do research for them. In exchange, the agencies get basic research from universities that moves science forward, or applied research that creates prototypes of potential products. By partnering with universities, the agencies get more value for money and quicker innovation than if they did all the research themselves.

This is because universities can leverage their investments from the government with other funds that they draw in. For example, in 2023, US universities received $27.7 billion from charitable donations, $6.2 billion in industrial collaborations, $6.7 billion from non-profit organizations, $5.4 billion from state and local government and $3.1 billion from other sources — boosting the $59.7 billion up to $108.8 billion (see ‘US research ecosystem’). This external money goes mostly to creating research labs and buildings that, as any campus visitor has seen, are often named after their donors.

Source: US Natl Center for Science and Engineering Statistics; US Congress; US Natl Venture Capital Assoc; AUTM; Small Business Administration

Thus, federal funding for science research in the United States is decentralized. It supports mostly curiosity-driven basic science, but also prizes innovation and commercial applicability. Academic freedom is valued and competition for grants is managed through peer review. Other nations, including China and those in Europe, tend to have more-centralized and bureaucratic approaches.

But what makes the US ecosystem so powerful is what then happens to the university research: it’s the engine for creating start-ups and jobs. In 2023, US universities licensed 3,000 patents, 3,200 copyrights and 1,600 other licences to technology start-ups and existing companies. Universities spin off more than 1,100 science-based start-ups each year, which lead to countless products.

Since the 1980 Bayh–Dole Act, US universities have been able to retain ownership of inventions that were developed using federally funded research (see go.nature.com/4cesprf). Before this law, any patents resulting from government-funded research were owned by the government, so they often went unused.

Closing the loop, these technology start-ups also get a yearly $4-billion injection in seed-funding grants from the same government research agencies. Venture capital adds a whopping $171 billion to scale those investments.

It all adds up to a virtuous circle of discovery and innovation.

Facilities costs

A crucial but under-appreciated component of this US research ecosystem is the indirect-cost reimbursement system, which allows universities to maintain the facilities and administrative support necessary for cutting-edge research. Critics often misunderstand the function of these funds, assuming that universities can spend this money on other areas, such as diversity, equity and inclusion programmes. In reality, they fund essential infrastructure: laboratory space, compliance with safety regulations, data storage and administrative support that allows principal investigators to focus on science rather than paperwork. Without this support, universities cannot sustain world-class research.

Reimbursing universities for indirect costs began during the Second World War, and it broke new ground, just as the weapons development did. Unlike in a typical fixed-price contract, the government did not set requirements for university researchers to meet or specifications to design their research to. It asked them to do research and, if the research looked like it might solve a military problem, to build a prototype they could test. In return, the government paid the researchers for their direct and indirect research costs.


Vannevar Bush (right) led the US Office of Scientific Research and Development during the Second World War.Credit: Bettmann/Getty

At first, the government reimbursed universities for indirect costs at a flat rate of 25% of direct costs. Unlike businesses, universities had no profit margin, so indirect-cost recovery was their only way to pay for and maintain their research infrastructure. By the end of the war, some universities had agreed on a 50% rate. The rate is applied to direct costs, so that a principal investigator will be able to spend two-thirds of a grant on direct research costs and the rest will go to the university for indirect costs. (A common misconception is that indirect-cost rates are a percentage of the total grant, for example a 50% rate meaning that half of the award goes to overheads.)
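To make the misconception described above concrete, here is a minimal sketch of the arithmetic (the helper name is mine, not an official formula beyond the rate definition in the text):

```python
def split_award(total, indirect_rate):
    """Split a grant award into direct and indirect portions.

    The indirect rate is applied to DIRECT costs, not to the total award:
    total = direct + direct * indirect_rate, so direct = total / (1 + rate).
    """
    direct = total / (1 + indirect_rate)
    return direct, total - direct

# A $1.5M award at a 50% indirect rate: two-thirds goes to direct research,
# not half -- the rate is a markup on direct costs, not a share of the total.
direct, indirect = split_award(1_500_000, 0.50)
print(f"direct = ${direct:,.0f}, indirect = ${indirect:,.0f}")
# direct = $1,000,000, indirect = $500,000
```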

After the Second World War, the US Office of Naval Research (ONR) began negotiating indirect-cost rates with universities on the basis of actual institutional expenses. Universities had to justify their overhead costs (administration, facilities, utilities) to receive full reimbursement. The ONR formalized financial auditing processes to ensure that institutions reported indirect costs accurately. This led to the practice of negotiating indirect-cost rates, which is still used today.

Since then, the reimbursement process has been tweaked to prevent gaming the system, but has remained essentially the same. Universities negotiate their indirect-cost rates with either the US Department of Health and Human Services (HHS) or the ONR. Most research-intensive universities receive rates of 50–60% for on-campus research. Private foundations often have a lower rate (10–20%), but tend to have wider criteria for what can be considered a direct cost.

In 2017, the first Trump administration attempted to impose a 10% cap on indirect costs for NIH research. Some in the administration viewed such costs as a form of bureaucratic bloat and argued that research universities were profiting from inflated overhead rates.

Congress rejected this and later added language in the annual funding bill that essentially froze most rates at their 2017 levels. This provision is embodied in section 224 of the Consolidated Appropriations Act of 2024, which has been extended twice and is still in effect.

In February, however, the NIH slashed its indirect reimbursement rate to an arbitrary 15% (see go.nature.com/4cgsndz). That policy is currently being challenged in court.

If the policy is ultimately allowed to proceed, the consequences will be immediate. Billions of dollars of support for research universities will be gone. In anticipation, some research universities are already scaling back their budgets, halting lab expansions and reducing graduate-student funding. This will mean fewer start-ups being founded, with effects on products, services, jobs, taxes and exports.

Race for talent

The ripple effects of Trump’s cuts to US academia are spreading, and one area in which there will be immediate ramifications is the loss of scientific talent. The United States has historically been the top destination for international researchers, thanks to its well-funded universities, innovation-driven economy and opportunities for commercialization.

US-trained scientists — many of whom have historically stayed in the country to launch start-ups or contribute to corporate R&D — are being actively recruited by foreign institutions, particularly in China, which has ramped up its science investments. China has expanded its Thousand Talents Program, which offers substantial financial incentives to researchers willing to relocate. France and other European nations are beginning to design packages to attract top US researchers.

Erosion of the US scientific workforce will have long-term consequences for its ability to innovate. If the country dismantles its research infrastructure, future transformative breakthroughs — whether in quantum computing, cancer treatment, autonomy or artificial intelligence — will happen elsewhere. The United States runs the risk of becoming dependent on foreign scientific leadership for its own economic and national-security needs.

History suggests that, once a nation loses its research leadership, regaining it is difficult. The United Kingdom never reclaimed its pre-war dominance in technological innovation. If current trends continue, the same fate might await the United States.

University research is not merely an academic concern — it is an economic and strategic imperative. Policymakers must recognize that federal R&D investments are not costs but catalysts for growth, job creation and national security.

Policymakers need to reaffirm the United States’ commitment to scientific leadership. If the country fails to act now, the consequences will be felt for generations. The question is no longer whether the United States can afford to invest in research. It is whether it can afford not to.

How the U.S. Became A Science Superpower

Prior to WWII, the U.S. was a distant second in science and engineering. By the time the war was over, U.S. science and engineering had blown past the British and led the world for 85 years.


It happened because two very different people were the science advisors to their nations’ leaders. Each had radically different views on how to use his country’s resources to build advanced weapons systems. Post-war, it meant Britain’s early lead was ephemeral, while the U.S. built the foundation for a science and technology innovation ecosystem that led the world – until now.

The British – Military Weapons Labs
When Winston Churchill became the British prime minister in 1940, he had at his side his science advisor, Professor Frederick Lindemann, his friend of 20 years. Lindemann headed the physics department at Oxford and was the director of the Oxford Clarendon Laboratory. Already at war with Germany, Britain focused its wartime priorities on defense and intelligence technology projects – weapons that used electronics, radar, physics, etc.: a radar-based air defense network called Chain Home, airborne radar on night fighters, and plans for a nuclear weapons program (the MAUD Committee, which started the British nuclear weapons effort code-named Tube Alloys). And its codebreaking organization at Bletchley Park was starting to read secret German messages – the Enigma – using the earliest computers ever built.

As early as the mid-1930s, the British, fearing Nazi Germany, developed prototypes of these weapons using their existing military and government research labs. The Telecommunications Research Establishment built early-warning radar, critical to Britain’s survival during the Battle of Britain, and the electronic warfare systems that protected British bombers over Germany. The Admiralty Research Lab built sonar and anti-submarine warfare systems. The Royal Aircraft Establishment was developing jet fighters. The labs then contracted with British companies to manufacture the weapons in volume. British government labs viewed the universities as a source of talent, but the universities had no role in weapons development.

Under Churchill, Professor Lindemann influenced which projects received funding and which were sidelined. Lindemann’s WWI experience as a researcher and test pilot on the staff of the Royal Aircraft Factory at Farnborough gave him confidence in the competence of British military research and development labs. His top-down, centralized approach with weapons development primarily in government research labs shaped British innovation during WW II – and led to its demise post-war.

The Americans – University Weapons Labs
Unlike Britain, the U.S. lacked a science advisor. It wasn’t until June 1940 that Vannevar Bush – ex-dean of engineering at MIT and president of the Carnegie Institution – told President Franklin Roosevelt that World War II would be the first war won or lost on the basis of advanced technology: electronics, radar, physics, etc.

Unlike Lindemann, Bush had a 20-year-long contentious history with the U.S. Navy and a dim view of government-led R&D. Bush contended that the government research labs were slow and second-rate. He convinced the President that while the Army and Navy ought to be in charge of making conventional weapons – planes, ships, tanks, etc. – scientists from academia could develop better advanced-technology weapons and deliver them faster than the Army and Navy research labs. And he argued that the only way the scientists could be productive was if they worked in a university setting, in civilian weapons labs run by university professors.

To the surprise of the Army and Navy Service chiefs, Roosevelt agreed to let Bush build exactly that organization to coordinate and fund all advanced weapons research.

(While Bush had no prior relationship with the President, Roosevelt had been the Assistant Secretary of the Navy during World War I and like Bush had seen first-hand its dysfunction. Over the next four years they worked well together. Unlike Churchill, Roosevelt had little interest in science and accepted Bush’s opinions on the direction of U.S. technology programs, giving Bush sweeping authority.)

In 1941, Bush upped the game by convincing the President that, in addition to research, the development, acquisition and deployment of these weapons also ought to be done by professors in universities. There they would be tasked with developing military weapons systems and solving military problems to defeat Germany and Japan. (The weapons were then manufactured in volume by U.S. corporations such as Western Electric, GE, RCA, Dupont, Monsanto, Kodak, Zenith, Westinghouse, Remington Rand and Sylvania.) To do this Bush created the Office of Scientific Research and Development (OSR&D).

OSR&D headquarters divided the wartime work into 19 “divisions,” 5 “committees,” and 2 “panels,” each solving a unique part of the military war effort. There were no formal requirements.

Staff at OSR&D worked with their military liaisons to understand the most important military problems, and each OSR&D division came up with solutions. These efforts spanned an enormous range of tasks – advanced electronics, radar, rockets, sonar, new weapons like the proximity fuse, napalm and the bazooka, new drugs such as penicillin and cures for malaria, as well as chemical warfare and nuclear weapons.

Each division was run by a professor hand-picked by Bush. And they were located in universities – MIT, Harvard, Johns Hopkins, Caltech, Columbia and the University of Chicago all ran major weapons systems programs. Nearly 10,000 scientists and engineers, professors and their grad students received draft deferments to work in these university labs.

(Prior to World War II, science in U.S. universities was primarily funded by companies interested in specific research projects, while funding for basic research came from two non-profits: the Rockefeller Foundation and the Carnegie Institution. In his role as president of the Carnegie Institution, Bush got to know – and fund! – every top university scientist in the U.S. As head of physics at Oxford, Lindemann viewed other academics as competitors.)

Americans – Unlimited Dollars
What changed U.S. universities, and the world, forever was government money. Lots of it. Prior to WWII most advanced technology research in the U.S. was done in corporate innovation labs (GE, AT&T, Dupont, RCA, Westinghouse, NCR, Monsanto, Kodak, IBM, et al.). Universities had no government funding for research (except for agriculture). Academic research had been funded by non-profits – mostly the Rockefeller and Carnegie foundations – and industry. Now, for the first time, U.S. universities were getting more money than they had ever seen. Between 1941 and 1945, OSR&D gave $9 billion (in 2025 dollars) to the top U.S. research universities. This made universities full partners in wartime research, not just talent pools for government projects as was the case in Britain.

The British – Wartime Constraints
Wartime Britain had very different constraints. First, England was under daily attack – bombed from the air and blockaded by submarines – so it was logical to focus on a smaller set of high-priority projects to counter these threats. Second, the country was teetering on bankruptcy. It couldn’t afford the broad and deep investments that the U.S. made. (This was illustrated by the abandonment of its nuclear weapons program when it realized how much it would cost to turn the research into industrial-scale engineering.) This meant that many other areas of innovation – such as early computing and nuclear research – were underfunded compared with their American counterparts.

Post War – Britain
Churchill was voted out of office in 1945. With him went Professor Lindemann and the coordination of British science and engineering. Britain would be without a science advisor until Churchill returned for a second term (1951–55) and brought Lindemann back with him.

The end of the war led to extreme downsizing of the British military, including severe cuts to all the government labs that had developed radar, electronics, computing, etc.

Financially exhausted, post-war Britain faced austerity that limited its ability to invest in large-scale innovation, and there were no post-war plans for government follow-on investments. The differing economic realities of the U.S. and Britain also played a key role in shaping their innovation systems. The United States had an enormous industrial base, abundant capital, and a large domestic market, which enabled large-scale investment in research and development. In Britain, a socialist government came to power. Churchill’s successor, Labour’s Clement Attlee, dissolved the British Empire and nationalized banking, power and light, transport, and iron and steel – all of which reduced competition and slowed technological progress.

While British research institutions like Cambridge and Oxford remained leaders in theoretical science, they struggled to scale and commercialize their breakthroughs. For instance, Alan Turing’s and Tommy Flowers’s pioneering work on computing at Bletchley Park didn’t turn into a thriving British computing industry – unlike in the U.S., where companies like ERA, Univac, NCR and IBM built on their wartime work.

Without the same level of government support for dual-use technologies or commercialization, and with private capital absent for new businesses, Britain’s post-war innovation ecosystem never took off.

Post War – The U.S.
Meanwhile, in the U.S., universities and companies realized that wartime government funding for research had been an amazing accelerator for science, engineering, and medicine. Everyone, including Congress, agreed that the U.S. government should continue to play a large role. In 1945, Vannevar Bush published a report, Science, The Endless Frontier, advocating government funding of basic research in universities, colleges, and research institutes. Congress argued over how best to organize federal support of science.

By the end of the war, OSR&D funding had taken technologies that had been just research papers, or were considered impossible to build at scale, and made them commercially viable – computers, rockets, radar, Teflon, synthetic fibers, nuclear power, etc. Innovation clusters formed around universities like MIT and Harvard, which had received large amounts of OSR&D funding (MIT’s Radiation Lab or “Rad Lab” employed 3,500 civilians during WWII and developed and built 100 radar systems deployed in theater), or around professors who ran one of the OSR&D divisions – like Fred Terman at Stanford.

When the war ended, the Atomic Energy Commission spun out of the Manhattan Project in 1946, and the military services took back advanced weapons development. In 1950, Congress set up the National Science Foundation to fund all basic science in the U.S. (except for the life sciences, a role the National Institutes of Health would assume). Eight years later, DARPA and NASA would also form as federal research agencies.

Ironically, Vannevar Bush’s influence would decline even faster than Professor Lindemann’s. When President Roosevelt died in April 1945 and Secretary of War Stimson retired in September 1945, all the knives came out from the military leadership Bush had bypassed in the war. His arguments on how to reorganize OSR&D made more enemies in Congress. By 1948 Bush had retired from government service. He would never again play a role in the U.S. government.

Divergent Legacies
Britain’s focused, centralized model built around government research labs was created in a struggle for short-term survival. It achieved brilliant breakthroughs but lacked the scale, integration and capital needed to dominate in the post-war world.

The U.S. built a decentralized, collaborative ecosystem, one that tightly integrated massive government funding of universities for research and prototypes while private industry built the solutions in volume.

A key component of this U.S. research ecosystem was the genius of the indirect cost reimbursement system. Not only did the U.S. fund researchers in universities by paying the cost of their salaries, it also gave universities money for the researchers’ facilities and administration. This was the secret sauce that allowed U.S. universities to build world-class labs for cutting-edge research that were the envy of the world. Scientists flocked to the U.S., causing other countries to complain of a “brain drain.”

Today, U.S. universities license 3,000 patents, 3,200 copyrights and 1,600 other licenses to technology startups and existing companies. Collectively, they spin out over 1,100 science-based startups each year, which lead to countless products and tens of thousands of new jobs. This university/government ecosystem became the blueprint for modern innovation ecosystems for other countries.

Summary
By the end of the war, the U.S. and British innovation systems had produced radically different outcomes. Both were shaped by the experience and personality of their nation’s science advisor.

  • Britain remained a leader in theoretical science and defense technology, but its government’s socialist economic policies led to its failure to commercialize wartime innovations.
  • The U.S. emerged as the global leader in science and technology, with innovations like electronics, microwaves, computing, and nuclear power driving its post-war economic boom.
  • The university-industry-government partnership became the foundation of Silicon Valley, the aerospace sector, and the biotechnology industry.
  • Today, China’s leadership has spent the last three decades investing heavily to surpass the U.S. in science and technology.
  • In 2025, with the abandonment of U.S. government support for university research, the long run of U.S. dominance in science may be over. Others will lead.

Quantum Computing – An Update

In March 2022, I wrote a description of the Quantum Technology Ecosystem. I thought this would be a good time to check in on the progress in building a quantum computer and explain more of the basics.

Just as a reminder, quantum technologies are used in three very different and distinct markets: Quantum Computing, Quantum Communications, and Quantum Sensing and Metrology. If you don’t know the difference between a qubit and a cue ball (I didn’t), read the tutorial here.

Summary –

  • There’s been incremental technical progress in making physical qubits
  • There is no clear winner yet between the seven approaches in building qubits
  • Reminder – why build a quantum computer?
  • How many physical qubits do you need?
  • Advances in materials science will drive down error rates
  • Regional research consortiums
  • Venture capital investment FOMO and financial engineering

We talk a lot about qubits in this post. As a reminder, a qubit is short for a quantum bit. It is a quantum computing element that leverages the principle of superposition (that quantum particles can exist in many possible states at the same time) to encode information via one of four methods: spin, trapped atoms and ions, photons, or superconducting circuits.
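For readers new to the notation, a single qubit’s superposition can be sketched with nothing more than a unit vector. This is a toy illustration of the math, not tied to any particular hardware approach:

```python
import math

# A qubit state a|0> + b|1> is a unit vector: |a|^2 + |b|^2 = 1.
# On measurement it collapses to 0 with probability |a|^2, or 1 with |b|^2.
a = b = 1 / math.sqrt(2)            # equal superposition of |0> and |1>
assert abs(a**2 + b**2 - 1) < 1e-9  # normalization check
print(f"P(0) = {a**2:.2f}, P(1) = {b**2:.2f}")  # P(0) = 0.50, P(1) = 0.50
```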

Incremental Technical Progress
As of 2024 there are seven different approaches being explored to build physical qubits for a quantum computer. The most mature currently are superconducting circuits, photonics, cold atoms, and trapped ions. Other approaches include quantum dots, nitrogen-vacancy centers in diamond, and topological qubits. All of these approaches have incrementally increased the number of physical qubits.

These multiple approaches are being tried because there is no consensus on the best path to building logical qubits. Each company believes its technology will scale to a working quantum computer.

Every company currently hypes the number of physical qubits it has working. By itself, this number is a meaningless indicator of progress toward a working quantum computer. What matters is the number of logical qubits.

Reminder – Why Build a Quantum Computer?
One of the key misunderstandings about quantum computers is that they are faster than current classical computers on all applications. They are not. They are faster only on a small set of specialized algorithms, and these special algorithms are what make quantum computers potentially valuable. For example, running Grover’s algorithm on a quantum computer can search unstructured data faster than a classical computer. Further, quantum computers are theoretically very good at minimization, optimization and simulation – think optimizing complex supply chains, finding the energy states that form complex molecules, financial models (looking at you, hedge funds), etc.

It’s possible that quantum computers will be treated as “accelerators” for overall compute workflows – much like GPUs today. In addition, several companies are betting that “algorithmic” qubits (better than “noisy” but worse than “error-corrected”) may be sufficient to provide some incremental performance on workflows like simulating physical systems. This potentially opens the door to earlier cases of quantum advantage.

However, while all of these algorithms might have commercial potential one day, no one has yet come up with a use for them that would radically transform any business or military application. Except for one – and that one keeps people awake at night. It’s Shor’s algorithm for integer factorization – a problem that underlies much of today’s public-key cryptography.

The security of today’s public-key cryptography systems rests on the assumption that breaking keys a thousand or more bits long is practically impossible. It requires factoring large numbers into primes (e.g., RSA) or computing discrete logarithms over elliptic curves (e.g., ECDSA, ECDH) or finite fields (DSA) – problems that can’t be solved with any type of classical computer, regardless of how large. Shor’s factorization algorithm can crack these codes if run on a quantum computer. This is why NIST has been encouraging the move to post-quantum, quantum-resistant cryptography.
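The number-theoretic core of Shor’s algorithm can be seen classically on toy numbers. The sketch below (illustrative only; the function name is mine) finds the period of a^x mod N by brute force – the one step a quantum computer performs exponentially faster – and then applies the standard classical post-processing to split N:

```python
from math import gcd

def factor_via_period(N, a):
    """Factor N by finding the period r of a^x mod N (Shor's core idea).

    Here the period is found by brute force; on a quantum computer this
    step is done with the quantum Fourier transform. The post-processing
    below (gcd of a^(r/2) +/- 1 with N) is identical in both cases."""
    r, val = 1, a % N
    while val != 1:                  # smallest r > 0 with a^r = 1 (mod N)
        val = (val * a) % N
        r += 1
    if r % 2 or pow(a, r // 2, N) == N - 1:
        return None                  # unlucky choice of a; retry with another
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

print(factor_via_period(15, 7))      # (3, 5): the period of 7^x mod 15 is 4
```

For a 2048-bit RSA modulus, the brute-force loop above would take longer than the age of the universe; that is exactly the gap Shor’s algorithm closes.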

How many physical qubits do you need for one logical qubit?
Thousands of logical qubits are needed to create a quantum computer that can run these specialized applications. Each logical qubit is constructed out of many physical qubits. The question is, how many physical qubits are needed? Herein lies the problem.

Unlike traditional transistors in a microprocessor that, once manufactured, always work, qubits are unstable and fragile. They can pop out of a quantum state due to noise, decoherence (when a qubit interacts with the environment), crosstalk (when a qubit interacts with a physically adjacent qubit), and imperfections in the materials making up the quantum gates. When that happens, errors occur in quantum calculations. So to correct for those errors you need lots of physical qubits to make one logical qubit.

So how do you figure out how many physical qubits you need?

You start with the algorithm you intend to run.

Different quantum algorithms require different numbers of qubits. Some algorithms (e.g., Shor’s prime factoring algorithm) may need >5,000 logical qubits (the number may turn out to be smaller as researchers find ways to implement the algorithm with fewer logical qubits).

Other algorithms (e.g., Grover’s algorithm) require fewer logical qubits for trivial demos but need thousands of logical qubits to see an advantage over linear search running on a classical computer. (See here, here and here for other quantum algorithms.)
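To see why scale matters for Grover’s algorithm, compare oracle-query counts: classical linear search needs roughly N/2 queries on average, while Grover’s needs about (π/4)·√N. A quick sketch of that arithmetic (the function name is mine):

```python
import math

def grover_queries(n_items):
    """Approximate oracle calls for Grover's search: ~(pi/4) * sqrt(N)."""
    return math.ceil((math.pi / 4) * math.sqrt(n_items))

# The quadratic speedup is negligible for small N and dramatic for large N.
for n in (10**6, 10**9, 10**12):
    print(f"N = {n:>15,}  classical ~ {n // 2:>15,}  Grover ~ {grover_queries(n):>9,}")
```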

Measure the physical qubit error rate.

Therefore, estimating the number of physical qubits needed to make a single logical qubit starts with measuring the physical qubit error rate (gate error rates, coherence times, etc.). Different technical approaches (superconducting, photonics, cold atoms, etc.) have different error rates, with causes of error unique to the underlying technology.

Current state-of-the-art physical qubits have error rates typically in the range of 1% to 0.1%. This means that on average between one in every 100 and one in every 1,000 quantum gate operations will result in an error. System performance is limited by the worst 10% of the qubits.

Choose a quantum error correction code

To recover from error-prone physical qubits, quantum error correction encodes the quantum information into a larger set of physical qubits that is resilient to errors. The surface code is the most commonly proposed error correction code. A practical surface code uses hundreds of physical qubits to create a logical qubit. Quantum error correction codes get more efficient the lower the error rates of the physical qubits. When errors rise above a certain threshold, error correction fails, and the logical qubit becomes as error-prone as the physical qubits.

The Math

To factor a 2048-bit number using Shor’s algorithm with a 10^-2 (1% per physical qubit) error rate:

  • Assume we need ~5,000 logical qubits
  • With an error rate of 1%, the surface error correction code requires ~500 physical qubits to encode one logical qubit. (The number of physical qubits required to encode one logical qubit using the surface code depends on the error rate.)
  • Physical qubits needed for Shor’s algorithm = 500 x 5,000 = 2.5 million

If you could reduce the error rate by a factor of 10 – to 10^-3 (0.1% per physical qubit):

  • Because of the lower error rate, the surface code would only need ~100 physical qubits to encode one logical qubit
  • Physical qubits needed for Shor’s algorithm = 100 x 5,000 = 500 thousand
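The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. (The per-logical-qubit overheads of ~500 and ~100 physical qubits are the rough surface-code estimates quoted in the bullets, not derived values.)

```python
# Rough sizing of Shor's algorithm on a 2048-bit number, using the
# surface-code overhead estimates quoted above (assumptions, not
# derived values).
LOGICAL_QUBITS = 5_000  # assumed logical qubits for Shor's algorithm

# physical qubit error rate -> physical qubits per logical qubit
scenarios = {1e-2: 500, 1e-3: 100}

for error_rate, overhead in scenarios.items():
    total = LOGICAL_QUBITS * overhead
    print(f"error rate {error_rate:.1%}: ~{total:,} physical qubits")
# error rate 1.0%: ~2,500,000 physical qubits
# error rate 0.1%: ~500,000 physical qubits
```

Note that cutting the error rate by 10x shrinks the machine by 5x in this sketch, which is why error rates dominate the hardware roadmap.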

In reality, another 10% or so of ancillary physical qubits are needed for overhead. And no one yet knows the error rate of wiring multiple logical qubits together via optical links or other technologies.

(One caveat to the math above: it assumes that every technical approach (Superconducting, Photonics, Cold Atoms, Trapped Ions, et al.) will require hundreds of physical qubits of error correction to make each logical qubit. There is always a chance a breakthrough could create physical qubits that are inherently stable, so that the number of error correction qubits needed drops substantially. If that happens, the math changes dramatically for the better and quantum computing becomes much closer.)

Today, the best anyone has done is to create 1,000 physical qubits.

We have a ways to go.

Advances in materials science will drive down error rates
As the math above shows, regardless of the technology used to create physical qubits (Superconducting, Photonics, Cold Atoms, Trapped Ions, et al.), reducing qubit errors can have a dramatic effect on how quickly a quantum computer can be built. The lower the physical qubit error rate, the fewer physical qubits needed in each logical qubit.

The key to this is materials engineering. To make a system of hundreds of thousands of qubits work, the qubits need to be uniform and reproducible. For example, decoherence errors are caused by defects in the materials used to make the qubits. For superconducting qubits that requires films with uniform thickness, controlled grain size, and low roughness. Other technologies require low loss and uniformity. All of the approaches to building a quantum computer require engineering exotic materials at the atomic level – resonators using tantalum on silicon, Josephson junctions built out of magnesium diboride, transition-edge sensors, Superconducting Nanowire Single Photon Detectors, etc.

Materials engineering is also critical in packaging these qubits (whether it’s superconducting or conventional packaging) and to interconnect 100s of thousands of qubits, potentially with optical links. Today, most of the qubits being made are on legacy 200mm or older technology in hand-crafted processes. To produce qubits at scale, modern 300mm semiconductor technology and equipment will be required to create better defined structures, clean interfaces, and well-defined materials. There is an opportunity to engineer and build better fidelity qubits with the most advanced semiconductor fabrication systems so the path from R&D to high volume manufacturing is fast and seamless.

There are likely only a handful of companies on the planet that can fabricate these qubits at scale.

Regional research consortiums
Two U.S. states, Illinois and Colorado, are vying to be the center of advanced quantum research.

Illinois Quantum and Microelectronics Park (IQMP)
Illinois has announced the Illinois Quantum and Microelectronics Park initiative, in collaboration with DARPA’s Quantum Proving Ground (QPG) program, to establish a national hub for quantum technologies. The State approved $500M for a “Quantum Campus” and has received $140M+ from DARPA with the state of Illinois matching those dollars.

Elevate Quantum
Elevate Quantum is the quantum tech hub for Colorado, New Mexico, and Wyoming. The consortium was awarded $127M from federal and state governments: $40.5M from the Economic Development Administration (part of the Department of Commerce), $77M from the State of Colorado, and $10M from the State of New Mexico.

(The U.S. has a National Quantum Initiative (NQI) to coordinate quantum activities across the entire government see here.)

Venture capital investment, FOMO, and financial engineering
Venture capital has poured billions of dollars into quantum computing, quantum sensors, quantum networking and quantum tools companies.

However, regardless of the amount of money raised, corporate hype, PR spin, press releases, and public offerings, no company is remotely close to having a quantum computer that can run any commercial application substantively faster than a classical computer.

So why all the investment in this area?

  1. FOMO – Fear Of Missing Out. Quantum is a hot topic. The U.S. government has declared quantum a technology of national interest. If you’re a deep tech investor and you don’t have one of these companies in your portfolio, it looks like you’re out of step.
  2. It’s confusing. The possible technical approaches to creating a quantum computer – Superconducting, Photonics, Cold Atoms, Trapped Ions, Quantum Dots, Nitrogen-Vacancy Centers in Diamond, and Topological – create a swarm of confusing claims. And unless you or your staff are well versed in the area, it’s easy to fall prey to the company with the best slide deck.
  3. Financial engineering. Outsiders confuse a successful venture investment with companies that generate lots of revenue and profit. That’s not always true.

Often, companies in a “hot space” (like quantum) can go public and sell shares to retail investors who have almost no knowledge of the space beyond the buzzword. If the stock price stays high for 6 months, the investors can sell their shares and make a pile of money regardless of what happens to the company.

The track record so far of quantum companies who have gone public is pretty dismal. Two of them are on the verge of being delisted.

Here are some simple questions to ask companies building quantum computers:

  • What are their current error rates?
  • What error correction code will they use?
  • Given their current error rates, how many physical qubits are needed to build one logical qubit?
  • How will they build and interconnect the number of physical qubits at scale?
  • How many qubits do they think are needed to run Shor’s algorithm to factor a 2048-bit number?
  • How will the computer be programmed? What are the software complexities?
  • What are the physical specs – unique hardware needed (dilution cryostats, et al.), power required, connectivity, etc.?

Lessons Learned

  • Lots of companies
  • Lots of investment
  • Great engineering occurring
  • Improvements in quantum algorithms may add as much (or more) to quantum computing performance as hardware improvements
  • The winners will be the ones who master materials engineering and interconnects
  • Jury is still out on all bets

Update: the kind folks at Applied Materials pointed me to the original 2012 Surface Codes paper. They pointed out that the math should look more like:

  • To factor a 2048-bit number using Shor’s algorithm with a 0.3% error rate (Google’s current quantum processor error rate)
  • Assume we need ~ 2,000 (not 5,000) logical qubits to run Shor’s algorithm.
  • With an error rate of 0.3%, the surface error correction code requires ~10 thousand physical qubits to encode one logical qubit and achieve a 10^-10 logical qubit error rate.
  • Physical qubits needed for Shor’s algorithm = 10,000 x 2,000 = 20 million

Still pretty far away from the 1,000 qubits we currently can achieve.

For those so inclined
The logical qubit error rate P_L is P_L = 0.03 (p/p_th)^((d+1)/2), where p_th ~ 0.6% is the error rate threshold for surface codes, p is the physical qubit error rate, and d is the distance of the code, which is related to the number of physical qubits by N = (2d – 1)^2.

See the plot below for P_L versus N at different physical qubit error rates, for reference.
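A small Python sketch of this formula, using the update’s numbers (~2,000 logical qubits, a 0.3% physical error rate, and a 10^-10 target logical error rate). The loop that searches for the smallest adequate code distance is an illustrative construction, not something from the paper:

```python
def physical_qubits_per_logical(p, target_pl=1e-10, p_th=0.006):
    """Smallest surface-code footprint whose logical error rate
    P_L = 0.03 * (p / p_th) ** ((d + 1) / 2) meets target_pl."""
    d = 3  # surface-code distances are odd
    while 0.03 * (p / p_th) ** ((d + 1) / 2) > target_pl:
        d += 2
    return (2 * d - 1) ** 2  # N = (2d - 1)^2 physical qubits

n = physical_qubits_per_logical(0.003)  # 0.3% physical error rate
print(n)           # ~13,000 physical qubits per logical qubit
print(n * 2_000)   # ~26 million physical qubits for Shor's algorithm
```

This lands in the same ballpark as the ~10 thousand qubits per logical qubit and ~20 million total quoted above; the exact figures depend on rounding and on how the code distance is chosen.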

How Saboteurs Threaten Innovation–and What to Do About It

This article first appeared in First Round Review.

“Only the Paranoid Survive”
Andy Grove – Intel CEO 1987-1998

I just had an urgent “can we meet today?” coffee with Rohan, an ex-student. His three-year-old startup had been slapped with a notice of patent infringement from a Fortune 500 company. “My lawyers said defending this suit could cost $500,000 just for discovery, and potentially millions of dollars if it goes to trial. Do you have any ideas?”

The same day, I got a text from Jared, a friend who’s running a disruptive innovation organization inside the Department of Defense. He just learned that their incumbent R&D organization has convinced leadership they don’t need any outside help from startups or scaleups.

Sigh….

Rohan and Jared have learned three valuable lessons:

  • Only the paranoid survive (as Andy Grove put it)
  • If you’re not losing sleep over who wants to kill you, you’re going to die.
  • The best fight is the one you can avoid.

It’s a reminder that innovators need to be better prepared for all the possible ways incumbents sabotage innovation.

Innovators often assume that their organizations and industry will welcome new ideas, operating concepts and new companies. Unfortunately, the world does not unfold like business school textbooks.

Whether you’re a new entrant taking on an established competitor, or you’re trying to stay scrappy while operating within a bigger company, here’s what you need to know about how incumbents will try to stand in your way – and what you can do about it.


Entrepreneurs versus Saboteurs
Startups and scaleups outside of companies or government agencies want to take share of an existing market, or displace existing vendors. Or if they have a disruptive technology or business model, they want to create a new capability or operating concept – even creating a new market.

As my student Rohan just painfully learned, the incumbent suppliers and existing contractors want to kill these new entrants. They have no intention of giving up revenue, profits and jobs. (In the government, additional saboteurs can include Congressional staffers, Congressmen and lobbyists, as these new entrants threaten campaign contributions and jobs in local districts.)

Intrapreneurs versus Saboteurs
Innovators inside of companies or government agencies want to make their existing organization better, faster, more effective, more profitable, more responsive to competitive threats or to adversaries. They might be creating or advocating for a better version of something that exists. Or perhaps they are trying to create something disruptive that never existed before.

Inside these commercial or government organizations there are people who want to kill innovation (as my friend Jared just discovered). These can be managers of existing programs, or heads of engineering or R&D organizations who are feeling threatened by potential loss of budget and authority. Most often, budgets and headcount are zero-sum games so new initiatives threaten the status quo.

Leaders of existing organizations often focus on the success of their department or program rather than the overall good of the organization. And at times there are perverse incentives as some individuals are aligned with the interests of incumbent vendors rather than the overall good of the company or government agency.

How Do Incumbents Kill Innovation?
Rohan and Jared were each dealing with one form of innovation sabotage. Incumbents use a variety of ways to sabotage innovative ideas inside organizations and to kill new companies outside them. Most of the time, innovators have no idea what just hit them. And those that do – like Rohan and Jared – have no game plan in place to respond.

Here are the most common methods of sabotage that I’ve seen, followed by a few suggestions on how to prepare and defend against them.

Founders and Innovators should expect that existing organizations and companies will defend their turf – ferociously.

 

Common ways incumbents kill innovation in both commercial markets and government agencies.

  • Create career FUD (fear, uncertainty and doubt). Position the innovative idea, product or service as a risk to the career of whoever adopts or champions it.
  • Emphasize the risk to existing legacy investments, like the cost of switching to the new product or service or highlighting the users who would object to it.
  • Claim that an existing R&D or engineering organization is already doing it (or can do it better/cheaper).
  • Create innovation theater by starting internal innovation programs with the existing staff and processes.
  • Set up committees and advisory boards to “study” the problem. Appoint well respected members of the status quo.
  • Poison funding for internal initiatives. Claim that you’ll have to kill important program X or Y to pay for the new initiative. Or fund the demo of the new idea and then slow-walk the budget for scale.
  • File Lawsuits/Protests against winners of contracts.
  • Use patents as a weapon. File patent infringement lawsuits and try to invalidate existing patents – whether the claims are valid or not.
  • Claim that employees have stolen IP from their previous employer.
  • File HR Complaints against internal intrapreneurs for cutting corners or breaking rules.
  • Isolate senior leadership from the innovators inside the organization via reporting hierarchy and controlling information about alternatives.
  • Object to structures and processes for the rapid adoption of new technologies. Treat innovation and execution as the same process. Lack tolerance for failure at innovation. Do not cultivate a culture of urgency. Don’t offer a structured career path for innovators.
  • Lock-up critical resources, like materials, components, people, law firms, distribution channels, partners and make them unavailable to innovation groups/startups.
  • Control industry/government standards to ensure that they are lock-ins for incumbents.
  • Acquire a startup and shut it down or bury its product
  • Poach talent from an innovation organization or company by convincing talent that the innovation effort won’t go anywhere.
  • Influence “independent” analysts, market research firms with “research” contracts to prove that the market is too small.
  • Confuse buyers and senior leadership by preannouncing products that never ship – vaporware.
  • Bundle products (e.g., Microsoft Office)
  • Long term lock-in contracts for commercial customers or sole-source for government programs (e.g. F-35).

How incumbents kill startups in government markets

  • File contract appeals or protests, creating delays that burn cash for new entrants.
  • File Inspector General (IG) complaints, claiming innovators are cutting corners, breaking rules or engaging in illegal hiring and spending. If possible, capture these IG offices and weaponize them against innovators.
  • Hijack the acquisition system by creating requirements written for incumbents, while setting unnecessary standards, barriers and paperwork for new entrants. Ignore requirements to investigate alternate suppliers and issue contracts to the incumbents.
  • Revolving door. The implicit promise of jobs to government program executives and managers, and to congressional staffers and congressmen.
  • Lobbying. Incumbents have dedicated staffs to shape requirements and budgets for their products, as well as dedicated staff for continual facetime in Washington. They are experts at managing the POM, PPBE, House and Senate Armed Services Committees, and appropriations committees.
  • Create career risks for innovators attempting to gain support outside of official government channels, penalizing unofficial contacts with members of Congress or their staffs.
  • Create Proprietary interfaces
  • Weaponize security clearances, delaying or denying access to needed secure information, or even pulling your, or your company’s clearance.

How incumbents kill startups in commercial markets.

  • Rent Seeking via regulatory bodies (e.g., FCC, SEC, FTC, FAA, public utility commissions, taxi/insurance commissions, school boards, etc.). Use government regulation to keep out new entrants who have more innovative business models (or delay them so the incumbents can catch up).
  • Rent Seeking via local, state and federal laws (e.g. occupational licensing, car dealership laws, grants, subsidies, or tariff protection). Use arguments – from public safety, to lack of quality, or loss of jobs –  to lobby against the new entrants.
  • Rent Seeking via courts to tie up and exhaust a startup’s limited financial resources.
  • Rent Seeking via proprietary interfaces (e.g. John Deere tractor interfaces…)
  • Poison startup financing sources. Tell VCs the incumbents already own the market. Tell government funders the company is out of cash.
  • Legal kickbacks, like discounts, SPIFs, Co-advertising (e.g. Intel and Microsoft for x86 processors/Windows).
  • State Attorney General complaints to tie up startup resources
  • Create fake benchmark groups or greenwash groups to prove existing solution is better or that new solution is worse.

Innovators Survival Checklist

There is no magic bullet I could have offered Rohan or Jared to defend against every possible move an incumbent might make. However, if they had realized that incumbents wouldn’t welcome them, they (and you) might have considered the suggestions below on how to prepare for innovation saboteurs.

In both government and commercial markets:

  • Map the order of battle. Understand how the money flows and who controls budget, headcount and organizational design. Understand who has political, regulatory, and leadership influence, and where they operate.
  • Understand saboteurs and their motivation. Co-opt them. Turn them into advocates (this works with skeptics). Isolate them with facts. Get them removed from their job (preferably by promoting them to another area).
  • Build an insurgent team. A technologist, visionary, champion, allies, proxies, etc. The insurgency grows over time.
  • Avoid publicly belittling incumbents. Do not say, “They don’t get it.” That will embarrass, infuriate and ultimately motivate them to put you out of business.
  • Avoid early slideware. Instead focus on delivering successful minimal viable products which demonstrate feasibility and a validated requirement.
  • Build evidence of your technical, managerial and operational excellence. Build Minimal Viable Products (MVPs) that illustrate that you understand a customer’s or stakeholder’s problem, have the resources to solve it, and have a path to deployment.
  • If possible, communicate and differentiate your innovation as incremental, even though over time it’s disruptive.
  • Go after rapid scale with a passionate customer who values the disruption, e.g., INDOPACOM in defense, or Uber, Airbnb, and Tesla in the commercial world.
  • Ally with larger partners who see you as a way to break the incumbents’ lock on the market, e.g., Palantir and the intelligence agencies versus the Army; in industry, IBM’s i2 / Textron Systems Overwatch.

In commercial markets:

  • Figure out an “under the radar” strategy that doesn’t attract incumbents’ lawsuits, regulations or laws when you have limited resources to fight back.
  • Patent strategy. Build a defensive patent portfolio and strategy. And consider an offensive one, buying patents you think incumbents may infringe.
  • Pick early markets where the rent seekers are weakest and scale there. For example, pick target markets with no national or state lobbying influence, e.g., Craigslist versus newspapers, Netflix versus video rental chains, Amazon versus bookstores, etc.
  • When you get scale and raise a large financing round, take the battle to the incumbents. Strategies at this stage include hiring your own lobbyists, or working with peers in your industry to build your own influence and political action groups.

Jared is still trying to get senior leadership to understand that the clock is ticking, and internal R&D efforts and current budget allocation won’t be sufficient or timely. He’s building a larger coalition for change, but the inertia for the status quo is overwhelming.

Rohan’s company was lucky. After months of scrambling (and tens of thousands of dollars), they ended up buying a patent portfolio from a defunct startup and were able to use it to convince the Fortune 500 company to drop their lawsuit.

I hope they both succeed.

What have you found to be effective in taking on incumbents?

What Does Product Market Fit Sound Like? This.

I got a call from an ex-student asking me “how do you know when you found product market fit?”

There’s been lots of words written about it, but no actual recordings of the moment.

I remembered I had saved this 90-second, 26-year-old audio file because this is when I knew we had found it at Epiphany.

The speaker was the Chief Financial Officer of a company called Visio, subsequently acquired by Microsoft.

I played it for her and I think it provided some clarity.

It’s worth a listen.

If you can’t hear the audio click here

How To Find Your Customer In the Dept of Defense – The Directory of DoD Program Executive Offices

Finding a customer for your product in the Department of Defense is hard: Who should you talk to? How do you get their attention?

Looking for DoD customers

How do you know if they have money to spend on your product?

It almost always starts with a Program Executive Office.


The Department of Defense (DoD) no longer owns all the technologies, products and services needed to deter or win a war – e.g., AI, autonomy, drones, biotech, access to space, cyber, semiconductors, new materials, etc.

Today, a new class of startups is attempting to sell these products to the Defense Department. Amazingly, there is no single DoD-wide phone book telling startups who to call in the Defense Department.

So I wrote one.

Think of the PEO Directory linked below as a “Who buys in the government?” phone book.

The DoD buys hundreds of billions of dollars of products and services per year, and nearly all of these purchases are managed by Program Executive Offices. A Program Executive Office may be responsible for a specific program (e.g., the Joint Strike Fighter) or for an entire portfolio of similar programs (e.g., the Navy Program Executive Office for Digital and Enterprise Services). PEOs define requirements, and their Contracting Officers buy things (handling the formal purchasing, issuing requests for proposals (RFPs), and signing contracts with vendors). Program Managers (PMs) work with the PEO and manage subsets of the larger program.

Existing defense contractors know who these organizations are and have teams of people tracking budgets and contracts. But startups?  Most startups don’t have a clue where to start.

This is a classic case of information asymmetry, and it’s not healthy for the Department of Defense or the nascent startup defense ecosystem.

That’s why I put this PEO Directory together.

This first version of the directory lists 75 Program Executive Offices and their Program Executive Officers and Program/Project Managers.

Each Program Executive Office is headed by a Program Executive Officer, a high-ranking official – either a member of the military or a senior civilian – responsible for the cost, schedule, and performance of a major system or portfolio of systems, some worth billions of dollars.

Below is a summary of 75 Program Executive Offices in the Department of Defense.

You can download the full 64-page document of Program Executive Offices and Officers with all 602 names here.

Caveats
Do not depend on this document for accuracy or completeness.
It is likely incomplete and contains errors.
Military officers typically change jobs every few years.
Program Offices get closed and new ones opened as needed.

This means this document was out of date the day it was written. Still, it represents an invaluable starting point for startups looking to work with the DoD.

How to Use The PEO Directory As Part of A Go-To-Market Strategy
While it’s helpful to know what Program Executive Offices exist and who staffs them, it’s even better to know where the money is, what it’s being spent on, and whether the budget is increasing, decreasing, or remaining the same.

The best place to start is by looking through an overview of the entire defense budget here. Then search for those programs in the linked PEO Directory. You can get an idea of whether a program has billions or millions of dollars.

Next, take a look at the budget documents released by the DoD Comptroller – particularly the P-1 (Procurement) and R-1 (R&D) budget documents.

Combining the budget document with this PEO directory helps you narrow down which of the 75 Program Executive Offices and 500+ program managers to call on.

With some practice you can translate the topline, account, or Program Element (PE) Line changes into a sales Go-To-Market strategy, or at least a hypothesis of who to call on.

Armed with the program description (it’s full of jargon and 9-12 months out of date), the Excel download here, and the Appendix here, you can identify targets for sales calls with DoD where your product has the best chance of fitting in.

The people and organizations in this list change more frequently than the money.

Knowing the people is helpful only after you understand their priorities — and money is the best proxy for that.

Future Work
Ultimately we want to give startups not only who to call on and who has the money, but also which Program Offices are receptive to new entrants, which have converted to portfolio management, and which have tried OTA contracts, as well as highlighting those who are doing something novel with metrics or outcomes.

Going forward this project will be kept updated by the Stanford Gordian Knot Center for National Security Innovation.

In the meantime send updates, corrections and comments to sblank@stanford.edu

Credit Where Credit Is Due
Clearly, the U.S. government intends to communicate this information. They have published links to DoD organizations here, even listing DoD social media accounts. But the list is fragmented and irregularly updated. Consequently, this type of directory has not existed in a usable format – until now.

Security Clearances at the Speed of Startups

Imagine you got a job offer from a company but weren’t allowed to start work – or get paid – for almost a year. And if you can’t pass a security clearance, your offer is rescinded. Or you get offered an internship but can’t work on the most interesting part of the project. Sounds like a nonstarter. Well, that’s the current process if you want to work for companies or government agencies that work on classified programs.


One Silicon Valley company, Palantir, is trying to change that and shorten the time between getting hired and doing productive work. Here’s why and how.

Over the last five years more of my students have understood that Russia’s brutal war in Ukraine and strategic competition with the People’s Republic of China mean that the world is no longer a stable and safe place. This has convinced many of them to work on national security problems in defense startups.

However, many of those companies and government agencies require you to work on projects with sensitive information the government wants to protect. These are called classified programs. To get hired, and to work on them, you need to first pass a government security clearance. (A security clearance is how the government learns whether you are trustworthy enough to keep secrets and not damage national security.)

For jobs at most defense startups/contractors or national security agencies, instead of starting work with your offer letter, you’d receive a “conditional” job offer – that’s a fancy way to say, “we want you to work here, but you need to wait 3 to 9 months without pay before you start, and if you can’t pass the security clearance we won’t hire you.” That’s a pretty high bar for students who have lots of other options for where to work.

Types of Security Clearances
The time the clearance process takes depends on how thoroughly and deeply the government investigates your background, which is directly related to how classified the work you’ll be doing is. The three primary levels of classification (from least to greatest) are Confidential, Secret, and Top Secret. The type and depth of the background investigation depends on what level of classified information you will be working with. For example, if you just need access to Confidential or Secret material, the government will do a National Agency Check with Law and Credit (NACLC): a look at the FBI’s criminal history repository, a credit check, and a check with your local law enforcement agencies. This can take a relatively short time (~3 months).

On the other hand, if you’re going to work on a Top Secret/SCI project, this requires a more extensive (and much longer, ~6-9 month) background check called a Single Scope Background Investigation (SSBI). Some types of clearances also require you to take a polygraph (lie-detector) test.

How Does the Government “Clear” you?
The National Background Investigation Services (NBIS) is the government agency that will investigate your background. They will ask about your:

  • Drugs and Alcohol (hard drugs, addiction, chronic drinking, etc.)
  • Criminal conduct (felonies..)
  • Financial stability (they’ll run a Credit Bureau Report)
  • How you’ve used IT systems (e.g. have you hacked any?)
  • United States allegiance
  • Foreign influence (do you own property overseas? Foreign investments, etc.)
  • Psychological conditions and personal behavior.
  • Travel History (have you lived or gone to China, Russia, Iran, North Korea, Syria, etc.)
  • Plus, they will talk to your friends, relatives, current and ex-significant others, etc. to learn more about you

Palantir’s Accelerated Student Clearance Plan
Palantir wants their interns and new hires to hit the ground running and work on the toughest and most interesting government problems from day one. However, these types of problems require having a security clearance. The problem is that today, all companies start an application for a security clearance the day you show up for work.

Palantir’s idea? If you get an internship or full-time offer from Palantir while you’re still in school, they will immediately employ you as a contractor. This will let them start your security clearance process while in school before you show up for work. That means you will be cleared ~9 months later in time for your first day on the job. Think of this like a college early admissions program. (If you’re interning, Palantir will hold your clearance for you if you come back to Palantir the following year.)

Why Do This?
Obviously, this is a long-term strategic investment in Palantir’s college talent, but it also benefits the entire defense ecosystem – creating a broader team of America’s best engineers who are able to support our country’s most critical missions. And Palantir is encouraging other defense tech companies to implement similar programs.

I think it’s a great idea.

Now what are the other innovative ideas Silicon Valley can do to attract a national security workforce?

Why Large Organizations Struggle With Disruption, and What to Do About It

Seemingly overnight, disruption has allowed challengers to threaten the dominance of companies and government agencies as many of their existing systems have now been leapfrogged. How an organization reacts to this type of disruption determines whether they adapt or die.


I’ve been working with a large organization whose very existence is being challenged by an onslaught of technology (AI, autonomy, quantum, cyberattacks, access to space, et al) from aggressive competitors, both existing and new. These competitors are deploying these new technologies to challenge the expensive (and until now incredibly effective) legacy systems that this organization has built for decades. (And they are doing it at a speed that looks like a blur to this organization.) But the organization is also challenged by the inaction of its own leaders, who cannot let go of the expensive systems and suppliers they built over decades. It’s a textbook case of the Innovator’s Dilemma.

In the commercial world creative destruction happens all the time. You get good, you get complacent, and eventually you get punched in the face. The same holds true for Government organizations, albeit with more serious consequences.

This organization’s fate is not yet sealed. Inside it, I’ve watched incredibly innovative groups create autonomous systems and software platforms that rival anything a startup is doing. They’ve found champions in the field organizations, and they’ve run experiments with them. They’ve provided evidence that their organization could adapt to the changing competitive environment and even regain the lead. Simultaneously, they’ve worked with outside organizations to complement and accelerate their internal offerings. They’re on the cusp of a potential transformation – but leadership hesitates to make substantive changes.

The “Do Nothing” Feedback Loop
I’ve seen this play out time and again in commercial and government organizations. There’s nothing more frustrating for innovators than to watch their organization being disrupted while its senior leaders hesitate to take more than token actions. On the other hand, no one who leads a large organization wants it to go out of business. So, why is adapting to changed circumstances so hard for existing organizations?

The answer starts at the top. Responding to disruption requires action from senior leadership: e.g. the CEO, board, Secretary, etc. Fearful that a premature pivot can put their legacy business or forces at risk, senior leaders delay deciding – often until it’s too late.

My time with this organization helped me appreciate why adopting and widely deploying something disruptive is difficult and painful in companies and government agencies. Here are the reasons:

Disconnected Innovators – Most leaders of large organizations are not fluent in the new technologies and the disruptive operating concepts/business models they can create. They depend on guidance from their staff and trusted advisors – most of whom have been hired and promoted for their expertise in delivering incremental improvements to existing systems. The innovators in their organization, by contrast, rarely have direct access to senior leaders. Innovators who embrace radically new technologies and concepts that challenge the status quo and dogma are not welcomed, let alone promoted or funded.

Legacy – The organization I’ve been working with, like many others, has decades of investment in existing concepts, systems, platforms, R&D labs, training, and a known set of external contractors. Building and sustaining these existing platforms and systems has left little money for creating and deploying new ones at the same scale (a problem that new entrants/adversaries may not have). Advocating that one or more of these platforms or systems is at risk or may no longer be effective is considered heresy and is likely the end of a career.

“The Frozen Middle” – A common refrain I hear from innovators in large organizations is that too many people are resistant to change (“they just don’t get it”). After seeing this behavior for decades, I’ve learned that the frozen middle occurs because of what’s called the “Semmelweis effect” – the unconscious tendency of people to stick to preexisting beliefs and reject new ideas that contradict them, because those ideas undermine their established norms and/or beliefs. (They really don’t get it.) This group is most comfortable sticking with existing processes and procedures, and hires and promotes people who execute the status quo. This works well when the system can continue to succeed with incremental growth, but in the face of more radical change, this normal human reaction shuts out new learning and limits an organization’s ability to rapidly adapt to new circumstances. The result is organizational blinders and frustrated innovators. And you end up with world-class people and organizations for a world that no longer exists.

Not everyone is affected by the Semmelweis effect. It’s often mid-grade managers / officers in this same “middle” who come up with disruptive solutions and concepts. However, unless they have senior champions (VP’s, Generals / Admirals) and are part of an organization with a mission to solve operational problems, these solutions die. These innovators lack alternate places where the culture encourages and funds experimentation and non-consensus ideas. Ironically, organizations tend to chase these employees out because they don’t conform, or if forced to conform, they grow disillusioned and leave for more innovative work in industry.

Hubris – Managerial overconfidence and complacency. Unlike the unconscious Semmelweis effect, this is an active and conscious denial of facts. It occurs when leaders/managers believe change threatens their jobs as decision-makers, or that new programs, vendors, or ideas increase the risk of failure, which may hurt their image and professional or promotional standing.

In the organization I’ve been working with, the internal engineering group offers senior leaders reassurances that they are responding to disruption by touting incremental upgrades to their existing platforms and systems.

Meanwhile, because their budget is a zero-sum game, they starve innovators of funds and organizational support for deploying disruptive new concepts at scale. The result is “innovation theater.” In the commercial world this behavior results in innovation demos but no shipping products, and a company on the path to irrelevance or bankruptcy. In the military it’s demos but no funding for deployments at scale.

Fear of Failure/Risk Aversion – Large organizations are built around repeatable and scalable processes that are designed to be “fail safe.” Here new initiatives need to match existing budgeting, legal, HR, and acquisition processes and procedures. However, disruptive projects can only succeed in organizations that have a “safe-to-fail” culture. This is where learning and discovery happen via incremental and iterative experimentation with a portfolio of new ideas, and failure is considered part of the process. “Fail safe” and “safe-to-fail” organizations need to be separate, with different cultures, different people, different development processes, and different risk tolerances.

Activist Investors Kill Transformation in Commercial Companies
A limit on transformation speed unique to commercial organizations is the fear of “activist investors,” who push public companies to optimize short-term profit by avoiding or limiting major investments in new opportunities and technology. When these investors gain control of a company, innovation investments are reduced, staff is cut, factories and R&D centers are closed, and profitable parts of the company and other valuable assets are sold.

Unique Barriers for Government Organizations
Government organizations face additional constraints that make them even slower to respond to change than large companies.

To start, leaders of the largest government organizations are often political appointees. Many have decades of relevant experience, but others are acting way above their experience level. This kind of mismatch tends to happen more frequently in government than in private industry.

Leaders’ tenures are too short – All but a few political appointees last only as long as their president is in the White House, while leaders of programs and commands in the military services often serve 2- or 3-year tours. That’s far too short to deeply understand and effectively execute organizational change. Because most government organizations lack a formal innovation doctrine or playbook – a body of knowledge that establishes a common frame of reference and a common professional language – institutional learning tends to be ephemeral rather than enduring. Little of the knowledge, practices, shared beliefs, theory, tactics, tools, procedures, language, and resources that the organization built under the last leader gets carried forward. Instead, each new leader relearns and imposes their own plans and policies.

Getting Along Gets Rewarded – Career promotion in all services is primarily driven by “getting along” with the status quo. This leads to things like not cancelling a failing program, not looking for new suppliers who might be cheaper/better/more responsive, pursuing existing force design and operating concepts even when all available evidence suggests they’re no longer viable, selecting existing primes/contractors, or not pointing out that a major platform or weapon is no longer effective. The incentives are to not take risks; doing so is likely the end of a career, and few get promoted for non-consensus thinking. Yet disruption requires risk.

Revolving doors – Senior leaders leave government service to work for the very companies whose programs they managed and from whom they purchased systems (often prime contractors). The result is that few who contemplate leaving the service for a well-paying job with a contractor will hold that contractor to account, or suggest an alternate vendor, while still in the service.

Prime Contractors – Primes are among our nation’s greatest assets while also being our greatest obstacles to disruptive change. In the 20th century, platforms/weapons were mostly hardware with software components. In the 21st century, platforms/weapons are increasingly software with hardware added. Most primes still use Waterfall development, with distinct planning, design, development, and testing phases, rather than Agile (iterative and incremental development with daily software releases). The result is that primes have a demonstrated inability to deliver complex systems on time. (Moving primes to software-upgradable or cloud-based systems breaks their financial model.)

As well, prime contractors typically have a “lock” on existing government contracts. That’s because it’s less risky for acquisition officials to choose them for follow-on work – and primes have decades of experience working through the byzantine and complex government purchasing process. They also have tons of people and money to influence all parts of the government acquisition system – from the requirements writers to program managers, to congressional staffers, to members of the Armed Services and Appropriations committees. New entrants have little chance to compete.

Congress – Lawmakers have incentives to support the status quo but few inducements to change it. Congress has a major say in what systems and platforms suppliers get used, with a bias to the status quo. To keep their own jobs, lawmakers shape military appropriations bills to support their constituents’ jobs and to attract donations from the contractors who hire them. (They and their staffers are also keeping the revolving door in mind for their next job.) Many congressional decisions that appear in the National Defense Authorization Act (NDAA) and in appropriations are to support companies that provide the most jobs in their districts and the most funds for their reelection. These come from the Prime contractors.

What to Do About It?
It starts at the top. Confronted with disruptive threats, senior leaders must actively work to understand:

  • The timing of the threat – disruption never comes with a memo, and when it happens its impact is exponential. When will disruption happen that will make our core business or operating concepts/force design obsolete? Will our competitors get there first?
  • The magnitude of the threat – will this put a small part of our business/capabilities at risk or will it affect our entire organization?
  • The impact of the threat – will this have a minor impact, or does it threaten the leadership or the very existence of the organization? What happens if our competitors/adversaries adopt this first?
  • The response to the threat – small experiments, department transformation, or company- or organization-wide transformation – and its timeline.

Increase Visibility of Disruptive Tech and Concepts/Add Outside Opinions

  • To counter disruptive threats, the typical reporting relationship of innovators filtered through multiple layers of management must be put aside.
    • Senior leaders need a direct and unfiltered pipeline to their internal innovation groups for monthly updates and demos of evidence-based experiments in operational settings.
    • And the new operating concepts to go with it.
  • Create a “Red Team” of advisors from outside their organization.
    • This group should update senior leaders on the progress of competitors
    • And offer unbiased assessment of their own internal engineering/R&D progress.
  • Stand up a strategic studies group that can develop new business models/ new strategic concepts usable at the operational level – ensure its connection with external sources of technical innovation
  • Create a “sensing” and “response” organization that takes actual company/agency/service problems out to VCs and startups to see how they would solve them
    • However, unless senior leaders actively make a point of seeing these results first-hand (at least twice a year) and have the mechanism to “respond” with purchase orders/OTAs, this effort will have little impact.

Actively and Urgently Gather Evidence

  • Run real-world experiments – simulations, war games – using disruptive tech and operating concepts (on offense and defense.)
  • See and actively seek out the impact of disruption in adjacent areas, e.g. AI’s impact on protein modeling, drones on the battlefield and in the Black Sea in Ukraine, et al.
  • Ask the pointy end of the organization (e.g. the sales force, fleet admirals) if they are willing to take more risk on new capabilities.

These activities need to happen in months, not years. Possible recommendations from these groups include: do nothing, run small experiments, transform a single function or department, or undertake a company- or organization-wide transformation.

What Does Organization-wide Transformation look like?

  • What outcome do we desire?
  • When do we need it?
  • What budget, people, capital equipment are needed?
    • What would need to be divested?
  • How to communicate this to all stakeholders and get them aligned?
  • In the face of disruption/crisis/wartime, advanced R&D groups now need a seat at the table, with budgets sufficient for deployment at scale.
  • Finally, encourage more imagination. How can we use partners and other outside resources for technology and capital?

Examples of leaders who transformed their organizations in the face of disruption include Microsoft CEO Satya Nadella and Apple’s Steve Jobs; in defense, Bill Perry, Harold Brown, and Ash Carter. Each met disruption with acceptance, acknowledgment, imagination, and action.

Much more to be said about transformation in future posts.