Teaching National Security Policy with AI

The videos embedded in this post are best viewed on steveblank.com

International Policy students will be spending their careers in an AI-enabled world. We wanted our students to be prepared for it. This is why we’ve adopted and integrated AI in our Stanford national security policy class – Technology, Innovation and Great Power Competition.

Here’s what we did, how the students used it, and what they (and we) learned.


Technology, Innovation and Great Power Competition is an international policy class at Stanford (taught by me, Eric Volmar, and Joe Felter). The course provides future policy and engineering leaders with an appreciation of the geopolitics of the U.S. strategic competition with great power rivals and the role critical technologies play in determining the outcome.

This course includes all that you would expect from a Stanford graduate-level class in the Masters in International Policy – comprehensive readings, guest lectures from current and former senior policy officials/experts, and deliverables in the form of written policy papers. What makes the class unique is that this is an experiential policy class. Students form small teams and embark on a quarter-long project that gets them out of the classroom to:

  • select a priority national security challenge, and then …
  • validate the problem and propose a detailed solution tested against actual stakeholders in the technology and national security ecosystem

The class combines multiple teaching tools.

  • Real world – Students work in teams on real problems from government sponsors
  • Experiential – They get out of the building to interview 50+ stakeholders
  • Perspectives – They get policy context and insights from lectures by experts
  • And this year… Using AI to Accelerate Learning

Rationale for AI
In using this quarter to introduce AI, we had three things going for us: 1) by fall 2024, AI tools were good and getting exponentially better; 2) Stanford had set up an AI Playground enabling students to use a variety of AI tools (ChatGPT, Claude, Perplexity, NotebookLM, Otter.ai, Mermaid, Beautiful.ai, etc.); and 3) many students were already using AI in classes, but it was usually ambiguous what they were allowed to do.

Policy students have to read reams of documents weekly. Our hypothesis was that our student teams could use AI to ingest and summarize content, identify key themes and concepts across the content, provide in-depth analysis of critical content sections, and then synthesize and structure their key insights and apply them to their specific policy problem. They did all that, and much, much more.

While Joe Felter and I had arm-waved “we need to add AI to the class” Eric Volmar was the real AI hero on the teaching team. As an AI power user Eric was most often ahead of our students on AI skills. He threw down a challenge to the students to continually use AI creatively and told them that they would be graded on it. He pushed them hard on AI use in office hours throughout the quarter. The results below speak for themselves.

If you’re not familiar with these AI tools in practice, it’s worth watching these one-minute videos.

Team OSC
Team OSC was trying to answer the question: what is the appropriate level of financial risk for the U.S. Department of Defense to take in providing loans or loan guarantees in technology industries?

The team started by using AI to do what we had expected – summarizing the stack of weekly policy documents using Claude 3.5. And, like all teams, their unexpected use of AI was to create new leads for their stakeholder interviews. They found that they could ask AI for a list of leaders who were involved in similar programs, or who were involved in their program’s initial stages of development.

See how Team OSC summarized policy papers here:

If you can’t see the video click here

Claude was also able to create a list of leaders within the Department of Energy Title 17 credit programs, EXIM, DFC, and other federal credit programs whom the team should interview. In addition, it created a list of leaders within the Congressional Budget Office and the Office of Management and Budget who would be able to provide insights. See the demo here:

If you can’t see the video click here
The team also used AI to transcribe podcasts. They noticed that key leaders of the organizations their problem came from had produced podcasts and YouTube videos. They used Otter.ai to transcribe these. That provided additional context for when they did interview them and allowed the team to ask insightful new questions.

If you can’t see the video click here

Note the power of fusing AI with interviews. The interviews ground the knowledge in the team’s lived experience.

The team came up with a use case the teaching team hadn’t thought of – using AI to critique the team’s own hypotheses. The AI not only gave them criticism but supported it with links from published scholars. See the demo here:

If you can’t see the video click here

Another use the teaching team hadn’t thought of was using Mermaid AI to create graphics for their weekly presentations. See the demo here:

If you can’t see the video click here

The surprises from this team kept coming. Their last was using Beautiful.ai to generate PowerPoint presentations. See the demo here:

If you can’t see the video click here

For all teams, using AI tools was a learning/discovery process all its own. Students were largely unfamiliar with most of the tools on day one.

Team OSC suggested that students start using AI tools early in the quarter and experiment with tools like ChatGPT and Otter.ai. Tools with steeper learning curves, like Mermaid, should be adopted at the very start of the project to train their models.

Team OSC AI tools summary: AI tools are not perfect, so make sure to cross-check summaries, insights, and transcriptions for accuracy and relevance. Be really critical of their outputs. The biggest takeaway is that AI works best when paired with human effort.

Team FAAST
The FAAST team was trying to understand how the U.S. can improve and scale the DoE FASST program in the urgent context of great power competition.

Team FAAST started using AI to do what we had expected, summarizing the stack of weekly policy documents they were assigned to read and synthesizing interviews with stakeholders.

One of the features of ChatGPT this team appreciated, and important for a national security class, was the temporary chat feature – data they entered would not be used to train OpenAI’s models. See the demo below.

If you can’t see the video click here

The team used AI to do a few new things we didn’t expect – to generate emails to stakeholders and to create interview questions. During the quarter the team used ChatGPT, Claude, Perplexity, and NotebookLM. By the end of the 10-week class they were using AI to do a few more things we hadn’t expected. Their use of AI expanded to include simulating interviews. They gave ChatGPT specific instructions on who they wanted it to act like, and it provided personalized, custom answers. See the example here.

If you can’t see the video click here

Learning-by-doing was a key part of this experiential course. The big idea is that students learn both the method and the subject matter together. By learning them together, they learn both better.

Finally, they used AI to map stakeholders and get advice on their next policy move, and asked ChatGPT to review their weekly slides (by screenshotting the slides, putting them into ChatGPT, and asking for feedback and advice).

The FAAST team AI tool summary: ChatGPT was especially good when working with images or screenshots, on multi-level tasks, and when they wanted to use custom instructions, as they did for the simulated stakeholder interviews. Claude was better at conversational, human-sounding writing, so they used it when sending emails. Perplexity was better for research because it provides citations; it can access the web and direct you to the source it’s citing. NotebookLM was something they tried out, but it was less successful. It was a cool tool that let them summarize specific policy documents into a podcast, but the summaries were often pretty vague.

Team NSC Energy
Team NSC Energy was working on a National Security Council problem, “How can the United States generate sufficient energy to support compute/AI in the next 5 years?”

At the start of the class, the team began by using ChatGPT to summarize their policy papers and generate tailored interview questions, while Claude was used to synthesize research for background understanding. Because ChatGPT occasionally hallucinated information, by the end of the class they were cross-validating the summaries with Perplexity Pro.

The team also used ChatGPT and Mermaid to organize their thoughts and determine who they wanted to talk to. ChatGPT was used to generate code to paste into the Mermaid flowchart organizer. Mermaid has its own diagramming language, so ChatGPT was helpful; the team didn’t have to learn all the syntax.
See the video of how Team NSC Energy used ChatGPT and Mermaid here:

If you can’t see the video click here
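For readers unfamiliar with Mermaid, a flowchart is written in a short text syntax that the tool renders into a diagram. Below is a hypothetical sketch of the kind of stakeholder chart ChatGPT can generate; the node names are invented for illustration, not taken from the team’s actual diagrams.

```mermaid
flowchart TD
    %% Hypothetical stakeholder map (illustrative names only)
    NSC[National Security Council] --> DOE[Dept. of Energy]
    NSC --> DOD[Dept. of Defense]
    DOE --> LABS[National Labs]
    DOD --> IND[Compute/AI Industry]
    LABS --> INT[Stakeholder Interviews]
    IND --> INT
```

Pasting text like this into Mermaid – or asking ChatGPT to produce it from a plain-English description – yields a rendered flowchart without learning the syntax by hand.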

Team Alpha Strategy
The Alpha Strategy team was trying to discover whether the U.S. could use AI to create a whole-of-government decision-making factory.

At the start of class, Team Alpha Strategy used ChatGPT-4o for policy document analysis and summary, as well as for stakeholder mapping. However, they discovered that going one by one through countless articles was time-consuming, so the team pivoted to using NotebookLM for document search and cross-analysis. See the video of how Team Alpha Strategy used NotebookLM here:

If you can’t see the video click here

The other tools the team used were custom GPTs to build stakeholder maps and diagrams and to organize interview notes. There is a wide variety of specialized GPTs; one that was really helpful, they said, was a scholar GPT.
See the video of how Team Alpha Strategy used custom GPTs:

If you can’t see the video click here

Like other teams, Alpha Strategy used ChatGPT to summarize their interview notes and to create flow charts to paste into their weekly presentations.

Team Congress
The Congress team was exploring the question, “If the Department of Defense were given economic instruments of power, which tools would be most effective in the current techno-economic competition with the People’s Republic of China?”

As other teams found, Team Congress first used ChatGPT to extract key themes from hundreds of pages of readings each week, as well as from press releases, articles, and legislation. They also used it for mapping and diagramming to identify potential relationships between stakeholders, or to creatively suggest alternate visualizations.

When Team Congress wasn’t able to reach their sponsor in the first two weeks of the class, much like Team OSC, they used AI tools to pretend to be their sponsor, a member of the Defense Modernization Caucus. Once they realized its utility, they continued to do mock interviews using AI role play.

The team also used custom GPTs but found they were limited in the number of documents that could be uploaded, and the team had a lot of content. So they used retrieval-augmented generation (RAG), which takes a user’s query, matches it against relevant sources in a knowledge base, and feeds those sources back to the model to ground its output. See the video of how Team Congress used retrieval-augmented generation here:

If you can’t see the video click here
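To make the idea concrete, here is a minimal sketch of the retrieval step in Python. It uses simple word-overlap scoring as a stand-in for the embedding search a production RAG pipeline would use, and the knowledge-base passages are invented for illustration – they are not the team’s documents.

```python
import re

def tokenize(text):
    # Lowercase and split into words, dropping punctuation.
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, knowledge_base, top_k=2):
    """Return the top_k passages that best match the query.

    A real RAG pipeline would use vector embeddings here; word
    overlap keeps this sketch dependency-free.
    """
    q = tokenize(query)
    scored = [(len(q & tokenize(doc)), doc) for doc in knowledge_base]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

# Invented example passages, standing in for the team's documents.
knowledge_base = [
    "The CHIPS Act funds domestic semiconductor manufacturing.",
    "Export controls restrict sales of advanced chips to China.",
    "The defense budget is set through annual appropriations.",
]

# The retrieved passages would be prepended to the query as context
# for the language model, grounding its answer in the knowledge base.
print(retrieve("What restricts chip sales to China?", knowledge_base))
```

The retrieved passages are stuffed into the model’s prompt as context; grounding answers this way is what let the team sidestep per-upload document limits.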

Team NavalX
The NavalX team was learning how the U.S. Navy could expand its capabilities in Intelligence, Surveillance, and Reconnaissance (ISR) operations on general maritime traffic.

Like all teams, they used ChatGPT to summarize and extract from long documents, organize their interview notes, and define technical terms associated with their project. In this video, note their use of prompting to guide ChatGPT to format their notes.

See the video of how Team NavalX used tailored prompts for formatting interview notes here:

If you can’t see the video click here

They also asked ChatGPT to role-play a critic of their argument and solution so that they could find the weaknesses. They also began uploading many interviews at once and asked Claude to find themes or ideas in common that they might have missed on their own.

Here’s how the NavalX team used Perplexity for research.

If you can’t see the video click here
Like other teams, the NavalX team discovered you can customize ChatGPT by telling it how you want it to act.

If you can’t see the video click here

Another surprising insight from the team is that you can use ChatGPT to tell you how to write better prompts for itself.

If you can’t see the video click here
In summary, Team NavalX used Claude to translate texts from Mandarin, and found that ChatGPT was best for writing tasks, Perplexity best for research tasks, Claude best for reading tasks, and NotebookLM best for summarization.

Lessons Learned

  • Integrating AI into this class took a dedicated instructor with a mission to create a new way to teach using AI tools
  • The result was that AI vastly enhanced and accelerated learning for all teams
    • It acted as a helpful collaborator
    • Fusing AI with stakeholder interviews was especially powerful
  • At the start of the class students were familiar with a few of these AI tools
    • By the end of the class they were fluent in many more of them
    • Most teams invented creative use cases
  • All Stanford classes we now teach – Hacking for Defense, Lean Launchpad, Entrepreneurship Inside Government – have AI integrated as part of the course
  • Next year’s AI tools will be substantially better

How To Find Your Customer In the Dept of Defense – The Directory of DoD Program Executive Offices

Finding a customer for your product in the Department of Defense is hard: Who should you talk to? How do you get their attention?

Looking for DoD customers

How do you know if they have money to spend on your product?

It almost always starts with a Program Executive Office.


The Department of Defense (DoD) no longer owns all the technologies, products and services to deter or win a war – e.g.  AI, autonomy, drones, biotech, access to space, cyber, semiconductors, new materials, etc.

Today, a new class of startups is attempting to sell these products to the Defense Department. Amazingly, there is no single DoD-wide phone book telling startups whom to call in the Defense Department.

So I wrote one.

Think of the PEO Directory linked below as a “Who buys in the government?” phone book.

The DoD buys hundreds of billions of dollars of products and services per year, and nearly all of these purchases are managed by Program Executive Offices. A Program Executive Office may be responsible for a specific program (e.g., the Joint Strike Fighter) or for an entire portfolio of similar programs (e.g., the Navy Program Executive Office for Digital and Enterprise Services). PEOs define requirements, and their Contracting Officers buy things (handling the formal purchasing, issuing requests for proposals (RFPs), and signing contracts with vendors). Program Managers (PMs) work with the PEO and manage subsets of the larger program.

Existing defense contractors know who these organizations are and have teams of people tracking budgets and contracts. But startups?  Most startups don’t have a clue where to start.

This is a classic case of information asymmetry and it’s not healthy for the Department of Defense or the nascent startup defense ecosystem.

That’s why I put this PEO Directory together.

This first version of the directory lists 75 Program Executive Offices and their Program Executive Officers and Program/Project Managers.

Each Program Executive Office is headed by a Program Executive Officer who is a high ranking official – either a member of the military or a high ranking civilian – responsible for the cost, schedule, and performance of a major system, or portfolio of systems, some worth billions of dollars.

Below is a summary of 75 Program Executive Offices in the Department of Defense.

You can download the full 64-page document of Program Executive Offices and Officers with all 602 names here.

Caveats
Do not depend on this document for accuracy or completeness.
It is likely incomplete and contains errors.
Military officers typically change jobs every few years.
Program Offices get closed and new ones opened as needed.

This means this document was out of date the day it was written. Still it represents an invaluable starting point for startups looking to work with DoD.

How to Use The PEO Directory As Part of A Go-To-Market Strategy
While it’s helpful to know what Program Executive Offices exist and who staffs them, it’s even better to know where the money is, what it’s being spent on, and whether the budget is increasing, decreasing, or remaining the same.

The best place to start is by looking through an overview of the entire defense budget here. Then search for those programs in the linked PEO Directory. You can get an idea of whether a program has billions or millions of dollars.

Next, take a look at the budget documents released by the DoD Comptroller – particularly the P-1 (Procurement) and R-1 (R&D) budget documents.

Combining the budget document with this PEO directory helps you narrow down which of the 75 Program Executive Offices and 500+ program managers to call on.

With some practice you can translate the topline, account, or Program Element (PE) Line changes into a sales Go-To-Market strategy, or at least a hypothesis of who to call on.

Armed with the program description (it’s full of jargon and 9-12 months out of date), the Excel download here, and the Appendix here, you can identify targets for sales calls within DoD where your product has the best chance of fitting in.

The people and organizations in this list change more frequently than the money.

Knowing the people is helpful only after you understand their priorities — and money is the best proxy for that.

Future Work
Ultimately we want to give startups not only who to call on and who has the money, but also which Program Offices are receptive to new entrants, which have converted to portfolio management, which have tried OTA contracts, and which are doing something novel with metrics or outcomes.

Going forward this project will be kept updated by the Stanford Gordian Knot Center for National Security Innovation.

In the meantime send updates, corrections and comments to sblank@stanford.edu

Credit Where Credit Is Due
Clearly, the U.S. government intends to communicate this information. They have published links to DoD organizations here, even listing DoD social media accounts. But the list is fragmented and irregularly updated. Consequently, this type of directory has not existed in a usable format – until now.

Security Clearances at the Speed of Startups

Imagine you got a job offer from a company but weren’t allowed to start work – or get paid – for almost a year. And if you can’t pass a security clearance, your offer is rescinded. Or you get offered an internship but can’t work on the most interesting part of the project. Sounds like a nonstarter. Well, that’s the current process if you want to work for companies or government agencies that work on classified programs.


One Silicon Valley company, Palantir, is trying to change that and shorten the time between getting hired and doing productive work. Here’s why and how.

Over the last five years more of my students have understood that Russia’s brutal war in Ukraine and strategic competition with the People’s Republic of China mean that the world is no longer a stable and safe place. This has convinced many of them to work on national security problems in defense startups.

However, many of those companies and government agencies require you to work on projects with sensitive information the government wants to protect. These are called classified programs. To get hired, and to work on them, you need to first pass a government security clearance. (A security clearance is how the government learns whether you are trustworthy enough to keep secrets and not damage national security.)

For jobs at most defense startups/contractors or national security agencies, instead of starting work with your offer letter, you’d receive a “conditional” job offer – a fancy way to say, “we want you to work here, but you need to wait 3 to 9 months without pay before you start, and if you can’t pass the security clearance we won’t hire you.” That’s a pretty high bar for students who have lots of other options for where to work.

Types of Security Clearances
The time the clearance process takes depends on how thoroughly and deeply the government investigates your background, which is directly related to how classified the work you’ll be doing is. The three primary levels of classification (from least to greatest) are Confidential, Secret, and Top Secret. The type and depth of the background investigation depends on what level of classified information you will be working with. For example, if you just need access to Confidential or Secret material, the government does a National Agency Check with Law and Credit (NACLC): it looks at the FBI’s criminal history repository, runs a credit check, and checks with your local law enforcement agencies. This can take a relatively short time (~3 months).

On the other hand, if you’re going to work on a Top Secret/SCI project, this requires a more extensive (and much longer, ~6-9 months) background check called a Single Scope Background Investigation (SSBI). Some types of clearances also require you to take a polygraph (lie-detector) test.

How Does the Government “Clear” you?
The National Background Investigation Services (NBIS) is the government agency that will investigate your background. They will ask about your:

  • Drugs and Alcohol (hard drugs, addiction, chronic drinking, etc.)
  • Criminal conduct (felonies, etc.)
  • Financial stability (they’ll run a Credit Bureau Report)
  • How you’ve used IT systems (e.g. have you hacked any?)
  • United States allegiance
  • Foreign influence (do you own property overseas? Foreign investments, etc.)
  • Psychological conditions and personal behavior.
  • Travel history (have you lived in or traveled to China, Russia, Iran, North Korea, Syria, etc.?)
  • Plus, they will talk to your friends, relatives, current and ex-significant others, etc. to learn more about you

Palantir’s Accelerated Student Clearance Plan
Palantir wants their interns and new hires to hit the ground running and work on the toughest and most interesting government problems from day one. However, these types of problems require having a security clearance. The problem is that today, all companies start an application for a security clearance the day you show up for work.

Palantir’s idea? If you get an internship or full-time offer from Palantir while you’re still in school, they will immediately employ you as a contractor. This will let them start your security clearance process while in school before you show up for work. That means you will be cleared ~9 months later in time for your first day on the job. Think of this like a college early admissions program. (If you’re interning, Palantir will hold your clearance for you if you come back to Palantir the following year.)

Why Do This?
Obviously this is a long-term strategic investment in Palantir’s college talent, but it also affects the entire defense ecosystem – to create a broader team of America’s best engineers who are able to support our country’s most critical missions. And they are encouraging other Defense Tech companies to implement a similar program.

I think it’s a great idea.

Now what are the other innovative ideas Silicon Valley can do to attract a national security workforce?

Why Large Organizations Struggle With Disruption, and What to Do About It

Seemingly overnight, disruption has allowed challengers to threaten the dominance of companies and government agencies as many of their existing systems have now been leapfrogged. How an organization reacts to this type of disruption determines whether they adapt or die.


I’ve been working with a large organization whose very existence is being challenged by an onslaught of technology (AI, autonomy, quantum, cyberattacks, access to space, et al.) from aggressive competitors, both existing and new. These competitors are deploying these new technologies to challenge the expensive (and until now incredibly effective) legacy systems that this organization has built over decades. (And they are doing it at a speed that looks like a blur to this organization.) But the organization is also challenged by the inaction of its own leaders, who cannot let go of the expensive systems and suppliers they built over decades. It’s a textbook case of the Innovator’s Dilemma.

In the commercial world creative destruction happens all the time. You get good, you get complacent, and eventually you get punched in the face. The same holds true for Government organizations, albeit with more serious consequences.

This organization’s fate is not yet sealed. Inside it, I’ve watched incredibly innovative groups create autonomous systems and software platforms that rival anything a startup is doing. They’ve found champions in the field organizations, and they’ve run experiments with them. They’ve provided evidence that their organization could adapt to the changing competitive environment and even regain the lead. Simultaneously, they’ve worked with outside organizations to complement and accelerate their internal offerings. They’re on the cusp of a potential transformation – but leadership hesitates to make substantive changes.

The “Do Nothing” Feedback Loop
I’ve seen this play out time and again in commercial and government organizations. There’s nothing more frustrating for innovators than to watch their organization being disrupted while its senior leaders hesitate to take more than token actions. On the other hand, no one who leads a large organization wants it to go out of business. So, why is adapting to changed circumstances so hard for existing organizations?

The answer starts at the top. Responding to disruption requires action from senior leadership: e.g. the CEO, board, Secretary, etc. Fearful that a premature pivot can put their legacy business or forces at risk, senior leaders delay deciding – often until it’s too late.

My time with this organization helped me appreciate why adopting and widely deploying something disruptive is difficult and painful in companies and government agencies. Here are the reasons:

Disconnected Innovators – Most leaders of large organizations are not fluent in the new technologies and the disruptive operating concepts/business models they can create. They depend on guidance from their staff and trusted advisors – most of whom have been hired and promoted for their expertise in delivering incremental improvements in existing systems. The innovators in their organization, by contrast, rarely have direct access to senior leaders. Innovators who embrace radically new technologies and concepts that challenge the status quo and dogma are not welcomed, let alone promoted, or funded.

Legacy – The organization I’ve been working with, like many others, has decades of investment in existing concepts, systems, platforms, R&D labs, training, and a known set of external contractors. Building and sustaining their existing platforms and systems has left little money for creating and deploying new ones at the same scale (problems that new entrants/adversaries may not have). Advocating that one or more of their platforms or systems is at risk or may no longer be effective is considered heresy and likely the end of a career.

The “Frozen Middle” – A common refrain I hear from innovators in large organizations is that too many people are resistant to change (“they just don’t get it”). After seeing this behavior for decades, I’ve learned that the frozen middle occurs because of what’s called the “Semmelweis effect” – the unconscious tendency of people to stick to preexisting beliefs and reject new ideas that contradict them, because those ideas undermine their established norms and/or beliefs. (They really don’t get it.) This group is most comfortable sticking with existing processes and procedures, and hires and promotes people who execute the status quo. This works well when the system can continue to succeed with incremental growth, but in the face of more radical change, this normal human reaction shuts out new learning and limits an organization’s ability to rapidly adapt to new circumstances. The result is organizational blinders and frustrated innovators. And you end up with world-class people and organizations for a world that no longer exists.

Not everyone is affected by the Semmelweis effect. It’s often mid-grade managers / officers in this same “middle” who come up with disruptive solutions and concepts. However, unless they have senior champions (VP’s, Generals / Admirals) and are part of an organization with a mission to solve operational problems, these solutions die. These innovators lack alternate places where the culture encourages and funds experimentation and non-consensus ideas. Ironically, organizations tend to chase these employees out because they don’t conform, or if forced to conform, they grow disillusioned and leave for more innovative work in industry.

Hubris – Hubris is managerial overconfidence and complacency. Unlike the unconscious Semmelweis effect, this is an active and conscious denial of facts. It occurs when some leaders/managers believe change threatens their jobs as decision-makers, or that new programs, vendors, or ideas increase the risk of failure, which may hurt their image and professional or promotional standing.

In the organization I’ve been working with, the internal engineering group offers senior leaders reassurances that they are responding to disruption by touting incremental upgrades to their existing platforms and systems.

Meanwhile because their budget is a zero-sum game, they starve innovators of funds and organizational support for deployment of disruptive new concepts at scale. The result is “innovation theater.” In the commercial world this behavior results in innovation demos but no shipping products and a company on the path to irrelevance or bankruptcy. In the military it’s demos but no funding for deployments at scale.

Fear of Failure/Risk Aversion – Large organizations are built around repeatable and scalable processes that are designed to be “fail safe.” Here new initiatives need to match existing budgeting, legal, HR, and acquisition processes and procedures. However, disruptive projects can only succeed in organizations that have a “safe-to-fail” culture, where learning and discovery happen via incremental and iterative experimentation with a portfolio of new ideas, and failure is considered part of the process. “Fail safe” and “safe-to-fail” organizations need to be separate, with different cultures, different people, different development processes, and different risk tolerances.

Activist Investors Kill Transformation in Commercial Companies
A limit on transformation speed unique to commercial organizations is the fear of activist investors, who push public companies to optimize short-term profit by avoiding or limiting major investments in new opportunities and technology. When these investors gain control of a company, innovation investments are reduced, staff are cut, factories and R&D centers are closed, and profitable parts of the company and other valuable assets are sold.

Unique Barriers for Government Organizations
Government organizations face additional constraints that make them even slower to respond to change than large companies.

To start, leaders of the largest government organizations are often political appointees. Many have decades of relevant experience, but others are acting way above their experience level. This kind of mismatch tends to happen more frequently in government than in private industry.

Leaders’ tenures are too short – All but a few political appointees last only as long as their president in the White House, while leaders of programs and commands in the military services often serve 2- or 3-year tours. This is far too short to deeply understand and effectively execute organizational change. Because most government organizations lack a formal innovation doctrine or playbook – a body of knowledge that establishes a common frame of reference and common professional language – institutional learning tends to be ephemeral rather than enduring. Little of the knowledge, practices, shared beliefs, theory, tactics, tools, procedures, language, and resources that the organization built under the last leader gets passed forward. Instead, each new leader relearns the lessons and imposes their own plans and policies.

Getting Along Gets Rewarded – Career promotion in all services is primarily driven by “getting along” with the status quo. This leads to things like not cancelling a failing program, not looking for new suppliers who might be cheaper/better/more responsive, pursuing existing force design and operating concepts even when all available evidence suggests they’re no longer viable, selecting existing primes/contractors, or not pointing out that a major platform or weapon is no longer effective. The incentive is to not take risks: doing so is likely the end of a career, and few get promoted for rocking the boat. This discourages non-consensus thinking. Yet disruption requires risk.

Revolving doors – Senior leaders leave government service and go to work for the very companies whose programs they managed and from whom they purchased systems (often prime contractors). The result is that few who contemplate leaving the service and want a well-paying job with a contractor will hold that contractor to account or suggest an alternate vendor while still serving.

Prime Contractors are simultaneously one of our nation’s greatest assets and one of our greatest obstacles to disruptive change. In the 20th century platforms/weapons were mostly hardware with software components. In the 21st century, platforms/weapons are increasingly software with hardware added. Most primes still use Waterfall development with distinct planning, design, development, and testing phases rather than Agile (iterative and incremental development with daily software releases). The result is that primes have a demonstrated inability to deliver complex systems on time. (Moving primes to software-upgradable or cloud-based systems breaks their financial model.)

As well, prime contractors typically have a “lock” on existing government contracts. It’s less risky for acquisition officials to choose them for follow-on work; the primes have decades of experience working through the byzantine and complex government purchasing process; and they have the people and money to influence all parts of the government acquisition system – from the requirements writers to program managers, to congressional staffers, to the members of the Armed Services and Appropriations committees. New entrants have little chance to compete.

Congress – Lawmakers have incentives to support the status quo but few inducements to change it. Congress has a major say in what systems and platforms suppliers get used, with a bias to the status quo. To keep their own jobs, lawmakers shape military appropriations bills to support their constituents’ jobs and to attract donations from the contractors who hire them. (They and their staffers are also keeping the revolving door in mind for their next job.) Many congressional decisions that appear in the National Defense Authorization Act (NDAA) and in appropriations are to support companies that provide the most jobs in their districts and the most funds for their reelection. These come from the Prime contractors.

What to Do About It?
It starts at the top. Confronted with disruptive threats, senior leaders must actively work to understand:

  • The timing of the threat – disruption never comes with a memo, and when it happens its impact is exponential. When will disruption happen that will make our core business or operating concepts/force design obsolete? Will our competitors get there first?
  • The magnitude of the threat – will this put a small part of our business/capabilities at risk or will it affect our entire organization?
  • The impact of the threat – will this have a minor impact, or does it threaten the leadership or the very existence of the organization? What happens if our competitors/adversaries adopt this first?
  • The response to the threat – small experiments, department transformation, or company- or organization-wide transformation – and its timeline.

Increase Visibility of Disruptive Tech and Concepts/Add Outside Opinions

  • To counter disruptive threats, the typical reporting relationship of innovators filtered through multiple layers of management must be put aside.
    • Senior leaders need a direct and unfiltered pipeline to their internal innovation groups for monthly updates and demos of evidenced-based experiments in operational settings.
    • And the new operating concepts to go with it.
  • Create a “Red Team” of advisors from outside their organization.
    • This group should update senior leaders on the progress of competitors
    • And offer unbiased assessment of their own internal engineering/R&D progress.
  • Stand up a strategic studies group that can develop new business models/ new strategic concepts usable at the operational level – ensure its connection with external sources of technical innovation
  • Create a “sensing” and “response” organization that takes actual company/agency/service problems out to VCs and startups and sees how they would solve them
    • However, unless senior leaders 1) actively make a point of seeing these firsthand (at least twice a year), and 2) have the mechanism to “respond” with purchase orders/OTAs, this effort will have little impact.

Actively and Urgently Gather Evidence

  • Run real-world experiments – simulations, war games – using disruptive tech and operating concepts (in offense and defense.)
  • Actively seek out the impact of disruption in adjacent areas, e.g. AI’s impact on protein modeling, drones on the battlefield and in the Black Sea in Ukraine, et al.
  • Ask the pointy end of the organization (e.g. the sales force, fleet admirals) if they are willing to take more risk on new capabilities.

These activities need to happen in months, not years. Possible recommendations from these groups include: do nothing, run small experiments, transform a single function or department, or undertake a company- or organization-wide transformation.

What Does Organization-wide Transformation look like?

  • What outcome do we desire?
  • When do we need it?
  • What budget, people, capital equipment are needed?
    • What would need to be divested?
  • How to communicate this to all stakeholders and get them aligned?
  • In the face of disruption/ crisis/ wartime advanced R&D groups now need a seat at the table with budgets sufficient for deployment at scale.
  • Finally, encourage more imagination. How can we use partners and other outside resources for technology and capital?

Examples of leaders who transformed their organizations in the face of disruption include Microsoft CEO Satya Nadella and Apple’s Steve Jobs; in defense, Bill Perry, Harold Brown and Ash Carter. Each dealt with disruption with acceptance, acknowledgment, imagination and action.

Much more to be said about transformation in future posts.

Secret History – When Kodak Went to War with Polaroid

This is part 2 of the Secret History of Polaroid and Edwin Land. Read part 1 for context.

Kodak and Polaroid, the two most famous camera companies of the 20th century, had a great partnership for 20+ years. Then in an inexplicable turnabout Kodak decided to destroy Polaroid’s business. To this day, every story of why Kodak went to war with Polaroid is wrong.

The real reason can be found in the highly classified world of overhead reconnaissance satellites.

Here’s the real story.


In April 1969 Kodak tore up a 20-year manufacturing partnership with Polaroid. In a surprise to everyone at Polaroid, Kodak declared war. They terminated their agreement to supply Polaroid with negative film for Polacolor – the only color film Polaroid had on the market. Kodak gave Polaroid two years’ notice but immediately raised the film price 10% in the U.S. and 50% internationally. And Kodak publicly announced they were going to make film for Polaroid’s cameras – a knife to the heart for Polaroid as film sales were what made Polaroid profitable. Shortly thereafter, Kodak announced they were also going to make instant cameras in direct competition with Polaroid cameras. In short, they were going after every part of Polaroid’s business.

What happened in April 1969 that caused Kodak to react this way?

And what was the result?

Read the sidebar for a Background on Film and Instant Photography

Today we take for granted that images can be seen and sent instantaneously on all our devices — phone, computers, tablets, etc. But that wasn’t always the case.

Film Photography
It wasn’t until the mid-19th century that it was possible to permanently capture an image. For the next 30 years photography was in the hands of an elite set of professionals. Each photo they took was captured on an individual glass plate coated with chemicals. To make a print, the photographers had to process the plates in more chemicals. Neither the cameras nor the processing were within the reach of a consumer. But in 1888 Kodak changed that when they introduced a truly disruptive innovation – a camera preloaded with a spool of strippable paper film with 100 exposures that consumers, rather than professional photographers, could use. When the roll was finished, the entire camera was sent back to the Kodak lab in Rochester, NY, where it was reloaded and returned to the customer while the first roll was being processed. But the real revolution happened in 1900 when Kodak introduced the Brownie camera with replaceable film spools. This made photography available to a mass market. You just sent the film to be developed, not the camera.

Up until 1936 consumer cameras captured images in black and white. That year Kodak introduced Kodachrome, the first color film for slides. In 1942, they introduced Kodacolor for prints.

While consumers now had easy-to-use cameras, there was a long delay between taking a picture and seeing it. The film inside the camera needed to be developed and printed. After you clicked the shutter and took the picture, you dropped the film off at a store, which sent it to a large regional photo processing lab that developed the film (using a bath of chemicals), then printed the photos as physical pictures. You would get your pictures back in days or a week. (In the late 1970s, mini-photo processing labs dramatically shortened that process, offering 1-hour photo development.) Meanwhile…

Instant Photography
In 1937 Edwin Land co-founded Polaroid to make optical filters called polarizers. They were used in photographic filters, glare-free sunglasses, and products that gave the illusion of 3-D. During WWII Polaroid made anti-glare goggles for soldiers and pilots, gun sights, viewfinders, cameras, and other optical devices with polarizing lenses.

In 1948 Polaroid pivoted, launching the product that would become synonymous with the company: the “instant camera.” In its first instant camera – the Model 95 – the film contained all the chemicals necessary to “instantly” develop a photo. The instant film was made of two parts – a negative sheet that lined up with a positive sheet, with the chemicals in between squeezed through a set of rollers. The negative sheet was manufactured by Kodak. Instead of days or weeks, it now took less than 90 seconds to see your picture.

For the next 30 years Polaroid made steadily better instant cameras. In 1963 Polacolor instant color film was introduced. In 1973 the Polaroid SX-70 Land Camera was introduced with a new type of instant film that no longer had to be peeled apart.

A Secret Grudge Match

To understand why Kodak tried to put Polaroid out of business you need to know some of the most classified secrets of the Cold War.

Project GENETRIX and The U-2 – Balloon and Airplane Reconnaissance over the Soviet Union
During the Cold War with the Soviet Union the U.S. intelligence community was desperate for intelligence. In the early 1950s the U.S. sent unmanned reconnaissance balloons over the Soviet Union.

Next, from 1956-1960 the CIA flew the Lockheed U-2 spy plane over the Soviet Union on 24 missions, taking photos of its military installations. (The U-2 program was kicked off by a 1954 memo from Edwin Land (Polaroid CEO) to the director of the CIA.)

The U-2 cameras used Kodak film, processed in a secret Kodak lab codenamed Bridgehead.  In May 1960 a U-2 was shot down inside Soviet territory and the U.S. stopped aircraft overflights of the Soviet Union. But luckily in 1956 the U.S. intelligence community had concluded that the future of gathering intelligence over the Soviet Union would be with spy satellites orbiting in space.

Air Force – SAMOS –  1st Generation Photo Reconnaissance Satellites
By the late 1950s the Department of Defense decided that the future of photo reconnaissance satellites would be via an Air Force program codenamed SAMOS.

The first SAMOS satellites would have a camera that took pictures and developed them while orbiting earth using special Kodak Bimat film, then scanned the negative and transmitted the image to a ground station. After multiple rocket failures and the realization that the resolution and number of images the satellite could downlink would be woefully inadequate for the type and number of targets (it would take 3 hours to downlink the photos from a single pass), the film read-out SAMOS satellites were canceled.

Sidebar– Kodak Goes to The Moon

While the Kodak Bimat film and scanner never made it as an intelligence reconnaissance system around the earth, it did make it to the moon. NASA’s Lunar Orbiter program to map the moon got its Kodak Bimat film and scanner cameras from the defunct SAMOS program. In 1966 and ‘67 NASA successfully launched 5 Lunar Orbiters around the moon, developing the film onboard and transmitting a total of 3,062 pictures to earth. (The resolution of the images and the fact that it took 40 minutes to send each photo back was fine for NASA’s needs.)

CIA’s CORONA – 2nd Generation Photo Reconnaissance Satellites
It was the CIA’s CORONA film-based photo reconnaissance satellites that first succeeded in returning intelligence photos from space. Designed as a rapid, cheap hack, CORONA was intended as a stopgap until more capable systems entered service. Fairchild built the first few CORONA cameras, but ultimately Itek became the camera system supplier. CORONA sent the exposed film back to earth in reentry vehicles that were recovered in mid-air. The film was developed by Kodak at their secret Bridgehead lab and sent to intelligence analysts in the CIA’s National Photographic Interpretation Center (NPIC) who examined it. (While orbiting 94 miles above the earth the cameras achieved 4½-foot resolution.) CORONA was kept in service from 1960 to 1972, completing 145 missions.

Film recovery via reentry vehicles would be the standard for the next 16 years.

Sidebar– The CIA versus the National Reconnaissance Office (NRO)

With the CIA’s success with CORONA, and the failure of the Air Force’s original SAMOS program, the Department of Defense felt the CIA was usurping its role in reconnaissance. In 1961 it was agreed that all satellite reconnaissance would be coordinated by a single National Reconnaissance Office (the NRO). For 31 years satellite and spy plane reconnaissance was organized as four separate covert programs:

Program A – Air Force satellite programs: SAMOS, GAMBIT, DORIAN…
Program B – CIA satellite programs: CORONA, HEXAGON, KENNEN…
Program C – Navy satellite programs: GRAB, POPPY …
Program D – CIA/Air Force reconnaissance Aircraft: U-2, A-12/SR-71, ST/POLLY, D-21

While this setup was rational on paper, the CIA and NRO would have a decades-long political battle over who would specify, design, build and task reconnaissance satellites. The CIA’s outside expert on imaging reconnaissance satellites was… Edwin Land, CEO of Polaroid.

The NRO’s existence wasn’t even acknowledged until 1992.

Air Force/NRO – GAMBIT – 3rd Generation Film Photo Reconnaissance Satellites
After the failure of the SAMOS on-orbit scanning system, the newly established National Reconnaissance Office (NRO) regrouped and adopted film recovery via reentry vehicles.

Prodded by the NRO and Air Force, Kodak put in an “unsolicited” proposal for a next-generation imaging satellite codenamed GAMBIT. Kodak’s cameras on GAMBIT had much better resolution than the Itek cameras on CORONA. In orbit 80 miles up, GAMBIT had high-resolution spotting capability – but in a narrow field of view. This complemented CORONA’s broad-area imaging. GAMBIT-1 (KH-7) produced images of 2-4 feet in resolution. It flew 38 missions from July 1963 to June 1967. The follow-on program, GAMBIT-3 (KH-8), provided even sharper images with resolution measured in inches. GAMBIT-3 flew 54 missions from July 1966 to August 1984. The resolution of GAMBIT’s photos wouldn’t be surpassed for decades.

CIA – HEXAGON – 4th Generation Film Photo Reconnaissance Satellites
Meanwhile the CIA decided it was going to build the next-generation reconnaissance satellite after GAMBIT. HEXAGON represented another technological leap forward. Unlike GAMBIT, which had a narrow field of view, the CIA proposed a satellite that could photograph a 300-nautical-mile-wide by 16.8-nautical-mile-long area in a single frame. And unlike GAMBIT, whose cameras were made by Kodak, HEXAGON’s cameras would be made by Perkin Elmer.

CIA Versus NRO – HEXAGON versus DORIAN
In 1969 the new Nixon administration was looking to cut spending and the intelligence budget was a big target. There were several new, very expensive programs being built: HEXAGON, the CIA’s school-bus-sized film satellite; and a military space station, the NRO/Air Force Manned Orbiting Laboratory (MOL), with its DORIAN KH-10 film-based camera (made by Kodak). There was also a proposed high-resolution GAMBIT follow-on satellite called FROG (Film Read Out GAMBIT) – again with a Kodak Bimat camera and a laser scanner.

In March 1969, President Nixon canceled the CIA’s HEXAGON satellite program in favor of the Manned Orbiting Laboratory (MOL), the Air Force space station with the Kodak DORIAN camera. It looked like Kodak had won and the CIA’s proposal lost.

However, the CIA fought back.

The next month, in April 1969, the Director of the CIA used the recommendation of the CIA’s reconnaissance intelligence panel – headed by Edwin Land (Polaroid’s CEO) – to get President Nixon to reverse his decision. Land’s panel argued that HEXAGON was essential to monitoring arms control treaties with the Soviet Union. Land said DORIAN would be useless because astronauts on the military space station could only photograph small amounts of territory, missing other things that could be a few miles away. In contrast, HEXAGON covered so much territory that there was simply no place for the Soviet Union to hide any forbidden bombers or missiles.

Land’s reconnaissance panel recommended: 1) canceling the manned part of the NRO/Air Force Manned Orbiting Laboratory (MOL), 2) using the DORIAN optics in a robotic system (which was ultimately never built), and 3) urging the President to instead start “highest priority” development of a “simple, long-life imaging satellite, using an array of photosensitive elements to convert the image to electrical signals for immediate transmission.” (This would become the KH-11 KENNEN, ending the need for film-based cameras in space.)

The result was:

Over the next two years, Land lobbied against the GAMBIT follow-on called FROG and after a contentious fight effectively killed it in 1971. But most importantly, Nixon gave the go-ahead to build the CIA’s KH-11 KENNEN electronic imaging satellite – dooming film-based satellites, and all of Kodak’s satellite business.

Why Did Kodak Go to War With Polaroid?

Now we can finally understand why Kodak was furious at Polaroid. The CEO of Polaroid had killed Kodak’s satellite reconnaissance business.

Kodak’s 1970 annual report said, “Government sales dropped precipitously from $248 million in 1969 to $160 million in 1970, a decline of nearly 36 percent.” (That’s three-quarters of a billion dollars in today’s dollars.)

The DORIAN camera on the Manned Orbiting Laboratory and the very high-resolution GAMBIT FROG follow-on were all Kodak camera systems built in Kodak’s K-Program, a highly classified segment of the company. In April 1969 when MOL/DORIAN KH-10 was canceled, Kodak laid off 1,500 people from that division.

Kodak also had 1,400 people in the special film-development facility codenamed Bridgehead. With film gone from reconnaissance satellites, only small amounts were needed for U-2 flights. Another 1,000+ people ultimately would be let go.

Louis Eilers had been Kodak’s president since 1967 and in 1969 became CEO. He had been concerned about Land’s advocacy of the CIA’s programs that shut Kodak out of HEXAGON. But he went ballistic when he learned of the role Edwin Land played in killing the Manned Orbiting Lab (MOL) and the Kodak DORIAN KH-10 camera.

Kodak’s Revenge and Ultimate Loss
In 1963 when Polaroid launched its first color instant film — Polacolor –  Kodak manufactured Polacolor’s film negative. By 1969 Polaroid was paying Kodak $50 million a year to manufacture that film. (~$400 million in today’s dollars.) Kodak tore up that manufacturing relationship in 1969 after the MOL/DORIAN cancelation.

Kodak then went further. In 1969 they started two projects: creating their own instant cameras to compete with Polaroid, and creating instant film for Polaroid cameras – remember, Polaroid made its profits selling film.

In 1976 Kodak came out with two instant cameras – the EK-4 and EK-6 – and instant film that could be used in Polaroid cameras. Polaroid immediately sued, claiming Kodak had infringed on Polaroid patents. The lawsuit went on for 9 years. Finally, in 1985 a court ruled that Kodak had infringed on Polaroid’s patents, and Kodak was forced to pull its cameras off store shelves and stop making them. Six years later, in 1991, Polaroid was awarded $925 million in damages from Kodak.

Epilogue
1976 was a landmark year for both Kodak and Polaroid. It was the beginning of their 15-year patent battle, but it was also the beginning of the end of film photography from space. That December the first digital imaging satellite, the KH-11 KENNEN, went into orbit.

After Land’s forced retirement in 1982, Polaroid never introduced a completely new product again. Everything was a refinement or repackaging of what it had figured out already. By the early ’90s, the alarms were clanging away; bankruptcy came in 2001.

Kodak could never leave its roots in film and missed being a leader in digital photography. It filed for bankruptcy protection in 2012, exited legacy businesses and sold off its patents before re-emerging as a sharply smaller company in 2013.

Today, descendants of the KH-11 KENNEN continue to operate in orbit.


Read all the Secret History posts here

The Secret History of Polaroid CEO Edwin Land

The connections between the world of national security and commercial companies still have surprises.


December 1976 – Vandenberg Air Force Base, U.S. military space port on the coast of California

As a Titan IIID rocket blasted off, it carried a spacecraft on top that would change everything about how intelligence from space was gathered. Heading to space was the first digital photo reconnaissance satellite. A revolution in spying from space had just begun.

For the previous 16 years three generations of U.S. photo reconnaissance satellites (257 in total) took pictures of the Soviet Union on film, then sent the film back to earth in reentry vehicles that were recovered in mid-air. After the film was developed, intelligence analysts examined it, trying to find and understand the Soviet Union’s latest missiles, aircraft, and ships. By the mid-1970s these photo reconnaissance satellites could see objects as small as a few inches from space. By then, the latest U.S. film-based reconnaissance satellite – Hexagon – was the size of a school bus and had six reentry vehicles that could send its film back to earth. Though state of the art for its time, the setup had a drawback: the pictures it returned might be days, weeks or even months old. That meant in a crisis – e.g. the Soviet invasion of Czechoslovakia in 1968 or the Arab-Israeli war in 1973 – photo reconnaissance satellites could not provide timely indications and warnings of what an adversary was up to right now. The holy grail for overhead imaging from space was to send the pictures to intelligence analysts on the ground in near real time.

And now, finally after a decade of work by the CIA’s Science and Technology Division, the first digital photo reconnaissance satellite – the KH-11, code-named KENNEN – which could do all that, was heading to orbit. For the first time pictures from space were going to head back to the ground via bits, showing images in near real time.

The KH-11/KENNEN project was not a better version of existing film satellites; it was an example of disruptive innovation. Today we take for granted that billions of cell phones have digital cameras, but in the 1970s getting a computer chip to “see” was science fiction. Doing so required a series of technology innovations in digital imaging sensors, and the CIA funded years of sensor research at multiple research centers and companies. That allowed them to build the KH-11 sensor (first with a silicon diode array, and then using the first linear CCD arrays), which turned the images seen by the satellite’s powerful telescope into bits.

Getting those bits to the ground no longer required reentry vehicles carrying film, but it did require the launch of a network of relay satellites (codenamed QUASAR, aka SDS, the Satellite Data System). While the KH-11 was taking pictures over the Soviet Union, the images were passed as bits from satellite to satellite at the speed of light, then downlinked to a ground station in the U.S. New ground stations were built to handle a large, fast stream of digital data. And the photo analysts required new equipment.

More importantly, like most projects that disrupt the status quo, it required a technical visionary who understood how the pieces would create a radically new system, and a champion with immense credibility in imaging and national security who could save the project each time the incumbents tried to kill it — even convincing the President of the United States to reverse its cancelation.

More detail in a bit. But let’s fast forward, four months later, to a seemingly unrelated story…

April 1977 – Needham, MA, Polaroid Annual Meeting
Edwin Land, the 67-year-old founder/CEO/chairman and director of research of Polaroid, the company that had been shipping instant cameras for 30 years, stood on stage and launched his own holy grail – and his last hurrah – an instant film-based home-movie camera called Polavision. At the time, you sent your home movie film out to get developed and you’d be able to view it in days or a week. Land was demoing an instant movie: you filmed a movie and 90 seconds later you could see it. It was a technical tour de force – remember, this was pre-digital, so the ability to instantly develop and show a movie seemed like magic. Much like the KH-11/KENNEN, it also was a complete system – camera, instant film, and player. It truly was the pinnacle of analog engineering.

But Polavision was a commercial disaster. Potential customers found it uncompelling and its $3,500 price (in today’s dollars) daunting. You could only record up to 2½ minutes of film. And believe it or not, with Polavision you couldn’t record sound with the movies. The 8mm film couldn’t be played back on existing 8mm projectors and could only be viewed on a special player with a 12” projection screen. There was no way to edit the film. It was a closed system. Worse, two years earlier Sony had introduced the first Betamax VCR, and JVC had just introduced VHS recorders that could hold hours of editable video. The video recorders looked like a better bet on the future. Polaroid discontinued Polavision two years later, in 1979.

For decades Land’s unerring instincts for instant products delighted customers. However, Polavision was the second misstep for Land. In 1972 at Land’s insistence, Polaroid had prematurely announced the SX-70 camera – another technical tour de force – before it could scale manufacturing. In 1975 the board helped Land “decide” to step down as president and chief operating officer to let other execs handle manufacturing and scale.

But the biggest threat to Polaroid came in 1976, a year before the Polavision announcement, when Kodak entered Polaroid’s instant camera and film business with competitive products.

After the Polavision debacle, Land was sidelined by the board, which no longer had faith in his technical and market vision. Land gave up the title of chairman in 1980. He resigned his board seat in 1982, and in 1985, bitter he had been forced out of the company he founded, he sold all his remaining stock, cutting all ties with the company.

Steve Jobs considered Land one of his first heroes, calling him “a national treasure.” (Take a look at part of a 1970 talk by Land eerily describing something that sounds like an iPhone.)

Meanwhile, inside Polaroid Labs, work had begun on two new technologies Land had sponsored: inkjet printing and something called “filmless electronic photography.” Neither project got out the door because the new management was concerned about cannibalizing Polaroid’s film business. Instead they doubled down on selling and refining instant film. Polaroid’s first digital camera wouldn’t hit the market till 1996, by which time the battle had been lost. 

What on earth do these two stories have to do with each other?
It turns out that the person who had consulted on every one of the film-based photo reconnaissance satellites – CORONA, GAMBIT, and HEXAGON – was also the U.S. government’s most esteemed expert on imaging and spy satellites. He was the same person who championed replacing the film-based photo satellites with digital imaging, and the visionary who pushed the CIA forward on the KH-11/KENNEN. By 1977, this person knew more about the application of digital imaging than anyone on the planet.

Who was that?

It was Edwin Land, the Founder/Chairman of Polaroid – the same guy who introduced the film-based Polavision.

More in the next installment here.

Read all the Secret History posts here



Technology, Innovation, and Great Power Competition – 2023 Wrap Up

We just wrapped up the third year of our Technology, Innovation, and Great Power Competition class – part of Stanford’s Gordian Knot Center for National Security Innovation.

Joe Felter, Mike Brown and I teach the class to:

  • Give our students an appreciation of the challenges and opportunities for the United States in its enduring strategic competition with the People’s Republic of China, Russia and other rivals.
  • Offer insights on how commercial technologies (AI, autonomy, cyber, quantum, semiconductors, access to space, biotech, hypersonics, and others) are radically changing how we will compete across all the elements of national power – e.g., diplomatic, informational, military, economic, financial, intelligence and law enforcement (our influence and footprint on the world stage).
  • Expose students to experiential learning on policy questions. Students formed teams, got out of the classroom, talked to stakeholders, and developed policy recommendations.

Why This Class?
The recognition that the United States is engaged in long-term strategic competition with the People’s Republic of China and Russia became a centerpiece of the 2017 National Security Strategy and 2018 National Defense Strategy. The 2021 interim National Security Guidance and the administration’s recently released 2022 National Security Strategy make clear that China has rapidly become more assertive and is the only competitor potentially capable of combining its economic, diplomatic, military, and technological power to mount a sustained challenge to a stable and open international system. And as we’ve seen in Ukraine, Russia remains determined to wage a brutal war and play a disruptive role on the world stage.

Prevailing in this competition will require more than merely acquiring the fruits of this technological revolution; it will require a paradigm shift in the thinking of how this technology can be rapidly integrated into new capabilities and platforms to drive new operational and organizational concepts and strategies that change and optimize the way we compete.

Class Organization
The readings, lectures, and guest speakers explored how emerging commercial technologies pose challenges and create opportunities for the United States in its strategic competition with great power rivals with an emphasis on the People’s Republic of China. We focused on the challenges created when U.S. government agencies, our federal research labs, and government contractors no longer have exclusive access to these advanced technologies.

This course included all that you would expect from a Stanford graduate-level class in the Masters in International Policy – comprehensive readings, guest lectures from current and former senior officials/experts, and written papers. What makes the class unique, however, is that this is an experiential policy class. Students formed small teams and embarked on a quarter-long project that got them out of the classroom to:

  • identify a priority national security challenge, and then …
  • validate the problem and propose a detailed solution tested against actual stakeholders in the technology and national security ecosystem.

The class was split into three parts.

Part 1, weeks 1 through 4, covered the international relations theories that attempt to explain the dynamics of interstate competition between powerful states, as well as the U.S. national security and national defense strategies and policies guiding our approach to great power competition – specifically focused on the People’s Republic of China (PRC) and the Chinese Communist Party (CCP).

In between parts 1 and 2 of the class, the students had a midterm individual project. It required them to write a 2,000-word policy memo describing how a U.S. competitor is using a specific technology to counter U.S. interests and a proposal for how the U.S. should respond.

Part 2, weeks 5 through 8, dove into the commercial technologies: semiconductors, space, cyber, AI and machine learning, high performance computing, and biotech. Each week the students had to read 5-10 articles (see the class readings here). And each week we had guest speakers on great power competition and on technology and its impact on national power, along with lectures and class discussion.

Guest Speakers
In addition to the teaching team, the course drew on the experience and expertise of guest lecturers from industry and from across U.S. Government agencies to provide context and perspective on commercial technologies and national security.

The students were privileged to hear from extraordinary guest speakers with significant experience and credibility on a range of topics related to the course objectives. Highlights of this year’s speakers include:

On National Security and American exceptionalism: General Jim Mattis, US Marine Corps (Ret.), former Secretary of Defense.

On China’s activities and efforts to compete with the U.S.: Matt Pottinger – former Deputy National Security Advisor, Elizabeth Economy – leading China scholar and former Dept of Commerce Senior Advisor for China, Tai Ming Cheung – author of Innovate to Dominate: The Rise of the Chinese Techno-Security State.

On U.S. – China Policy: Congressman Mike Gallagher, Chair of the House Select Committee on China.

On Innovation and National Security: Chris Brose – Author of The Kill Chain, Doug Beck – Director of the Defense Innovation Unit, Anja Manuel – Executive Director of the Aspen Strategy and Security Forum.

For Biotech: Ben Kirkup – senior biologist, US Navy, Ed You – FBI Special Agent, Biological Countermeasures Unit, Deborah Rosenblum – Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, Joe DeSimone – Professor of Chemical Engineering.

For AI: Jared Dunnmon – Technical Director for AI at the Defense Innovation Unit, Lt. Gen. (Ret.) Jack Shanahan – Director, Joint Artificial Intelligence Center, Anshu Roy – CEO, Rhombus AI.

For Cyber: Anne Neuberger – Deputy National Security Advisor for Cyber.

For Semiconductors: Larry Diamond – Senior Fellow at the Hoover Institution.

Significantly, the students were able to hear the Chinese perspective on U.S. – China competition from Dr. Jia Qingguo – member of the Standing Committee of the Chinese People’s Political Consultative Conference.

The class closed with a stirring talk and call to action by former National Security Advisor LTG (Ret.) H.R. McMaster.

In the weeks in between, we had teaching team lectures followed by speakers who led discussions on the critical commercial technologies.

Team-based Experiential Project
The third part of the class was unique – a quarter-long, team-based project. Students formed teams of 4-6 and selected a national security challenge facing an organization or agency within the U.S. Government. They developed hypotheses of how commercial technologies can be used in new and creative ways to help the U.S. wield its instruments of national power. And consistent with all our Gordian Knot Center classes, they got out of the classroom and interviewed 20+ beneficiaries, policy makers, and other key stakeholders, testing their hypotheses and proposed solutions.

Hacking For Policy – Final Presentations:
At the end of the quarter, each student team’s policy recommendations were summarized in a 10-minute presentation. The presentation was the story of the team’s learning journey, describing where they started, where they ended, and the key inflection points in their understanding of the problem. (A written 3,000-word report followed, focusing on their recommendations for addressing their chosen security challenge and describing how their solutions can be implemented with speed and urgency.)

By the end of the class all the teams realized that the policy problem they had selected had morphed into something bigger, deeper, and much more interesting.

Their policy presentations are below.

The class is as exhausting to teach as it is to take. We have an awesome set of teaching assistants.

Team 1: Precision Match (AI for DoD Operations)

Click here to see the presentation.

What makes teaching worthwhile is the feedback we get from our students:

TIGPC has been the best class I’ve taken at Stanford and has caused me to do some reflection in what I want to do after my time at Stanford. I’m only a sophomore but doing such a deep dive into energy and (as Steve says) getting out of the building, I’m starting to seriously consider a career in clean energy security post graduation.

Team 2: Outbound Investment to China

Click here to see the presentation.

Team 3: Open-Source AI

Click here to see a summary of the presentation.

Team 4: AlphaChem

Click here to see the presentation.

One of my takeaways from the class is that you can be the smartest person in the room, but you will never have as much knowledge as everyone else combined so go talk to people, it will make you far smarter

Team 5: South China Sea

Click here to see the presentation.

Awesome class! … incredible in bringing prestigious guest speakers into the class and having engaging discussions. My background was not in national security and this class really offered an important perspective on the opportunities for technology innovation to impact and help with national security.

Team 6: Chinese Real Estate Investment in the U.S.

Click here to see the presentation.

Team 7: Public Private Partnerships

Click here to see the presentation.

Just wanted to let you know that, as a Senior, this is one of the best classes I’ve taken across my 4 years at Stanford.

Team 8: Ukraine Aid

Click here to see the presentation.

Lessons Learned

  • We combined lecture and experiential learning so our students can act on problems not just admire them
    • The external input the students received was a force multiplier
    • It made the lecture material real, tangible and actionable
    • Lean problem solving methods can be effectively employed to address pressing national security and policy challenges
    • This course was akin to a “Hacking for Policy class” and can be tweaked and replicated going forward.
  • The class created opportunities for our best and brightest to engage and address challenges at the nexus of technology, innovation and national security
    • When students are provided such opportunities they aggressively seize them with impressive results
    • The final presentations and papers from the class are proof of that
  • Pushing students past what they think is reasonable produces extraordinary output. Most rise far above it

The Department of Defense Is Getting Its Innovation Act Together – But More Can Be Done

This post previously appeared in Defense News and C4ISRNET.

Despite the clear and present danger of threats from China and elsewhere, there’s no agreement on what types of adversaries we’ll face; how we’ll fight, organize, and train; and what weapons or systems we’ll need for future fights. Instead, developing a new doctrine to deal with these new issues is fraught with disagreements, differing objectives, and incumbents who defend the status quo. Yet change in military doctrine is coming. Deputy Defense Secretary Kathleen Hicks is navigating the tightrope of competing interests to make it happen – hopefully in time.

From left, Skydio CEO Adam Bry demonstrates the company’s autonomous systems technology for Deputy Defense Secretary Kathleen Hicks and Doug Beck, director of the Defense Innovation Unit, during a visit to the company’s facility in San Mateo, Calif. (Petty Officer 1st Class Alexander Kubitza/U.S. Navy)


There are several theories of how innovation in military doctrine and new operational concepts occur. Some argue new doctrine emerges when civilians intervene to assist military “mavericks,” e.g., the Goldwater-Nichols Act. Or a military service can generate innovation internally when senior military officers recognize the doctrinal and operational implications of new capabilities, e.g., Rickover and the Nuclear Navy.

But today, innovation in doctrine and concepts is driven by four major external upheavals that simultaneously threaten our military and economic advantage:

  1. China delivering multiple asymmetric offset strategies.
  2. China fielding naval, space and air assets in unprecedented numbers.
  3. The proven value of a massive number of attritable uncrewed systems on the Ukrainian battlefield.
  4. Rapid technological change in artificial intelligence, autonomy, cyber, space, biotechnology, semiconductors, hypersonics, etc., with many driven by commercial companies in the U.S. and China.

The Need for Change
The U.S. Department of Defense’s traditional sources of innovation (primes, FFRDCs, service labs) are no longer sufficient by themselves to keep pace.

The speed, depth and breadth of these disruptive changes outpace the responsiveness and agility of our current acquisition systems and defense-industrial base. Yet in the decade since these external threats emerged, the DoD’s doctrine, organization, culture, processes, and tolerance for risk have mostly operated as though nothing substantial needed to change.

The result is that the DoD has world-class people and organizations for a world that no longer exists.

It isn’t that the DoD doesn’t know how to innovate on the battlefield. In Iraq and Afghanistan innovative crisis-driven organizations appeared, such as the Joint Improvised-Threat Defeat Agency and the Army’s Rapid Equipping Force. And armed services have bypassed their own bureaucracy by creating rapid capabilities offices. Even today, the Security Assistance Group-Ukraine rapidly delivers weapons.

Unfortunately, these efforts are siloed and ephemeral, disappearing when the immediate crisis is over. They rarely make permanent change at the DoD.

But in the past year, several signs of meaningful change show that the DoD is serious about changing how it operates and radically overhauling its doctrine, concepts, and weapons.

First, the Defense Innovation Unit was elevated to report directly to the secretary of defense. Previously hobbled with a $35 million budget and buried inside the research and engineering organization, DIU’s budget and reporting structure were signs of how little the DoD valued commercial innovation.

Now, with DIU rescued from obscurity, its new director Doug Beck chairs the Deputy’s Innovation Steering Group, which oversees defense efforts to rapidly field high-tech capabilities to address urgent operational problems. DIU also put staff in the Navy and U.S. Indo-Pacific Command to discover actual urgent needs.

Furthermore, the House Appropriations Committee signaled the importance of DIU with a proposed fiscal 2024 budget of $1 billion to fund these efforts. And the Navy has signaled, through the creation of the Disruptive Capabilities Office, that it intends to fully participate with DIU.

In addition, Deputy Defense Secretary Hicks unveiled the Replicator initiative, meant to deploy thousands of attritable autonomous systems (i.e. drones – in the air, water and undersea) within the next 18 to 24 months. The initiative is the first test of the Deputy’s Innovation Steering Group’s ability to deliver autonomous systems to warfighters at speed and scale while breaking down organizational barriers. DIU will work with new companies to address anti-access/area denial problems.

Replicator is a harbinger of fundamental DoD doctrinal changes as well as a solid signal to the defense-industrial base that the DoD is serious about procuring components faster, cheaper and with a shorter shelf life.

Finally, at the recent Reagan National Defense Forum, the world felt like it had turned upside down. Defense Secretary Lloyd Austin talked about DIU in his keynote address and came to Reagan immediately after visiting its headquarters in Silicon Valley, where he met with innovative companies. On many panels, high-ranking officers and senior defense officials used the words “disruption,” “innovation,” “speed” and “urgency” so often it was clear they really meant it.

In the audience were a plethora of venture and private capital fund leaders looking for ways to build companies that would deliver innovative capabilities with speed.

Conspicuously, unlike in previous years, sponsor banners at the conference were not the incumbent prime contractors but rather insurgents – new potential primes like Palantir and Anduril. The DoD has woken up. It has realized new and escalating threats require rapid change, or we may not prevail in the next conflict.

Change is hard, especially in military doctrine. (Ask the Marines.) Incumbent suppliers don’t go quietly into the night, and new suppliers almost always underestimate the difficulty and complexity of a task. Existing organizations defend their budget, headcount, and authority. Organization saboteurs resist change. But adversaries don’t wait for our decades-out plans.

But More Can Be Done

  • Congress and the military services can support change by fully funding the Replicator initiative and the Defense Innovation Unit.
  • The services have no procurement budget for Replicator, and they’ll have to shift existing funds to unmanned and AI programs.
  • The DoD should turn its new innovation process into actual, substantive orders for new companies.
  • And other combatant commands should follow what INDOPACOM is doing.
  • In addition, defense primes should more often aggressively partner with startups.

Change is in the air. Deputy Defense Secretary Hicks is building a coalition of the willing to get it done.

Here’s to hoping it happens in time.

The Secret History of Minnesota: Engineering Research Associates

This post is the latest in the “Secret History Series.” They’ll make much more sense if you watch the video or read some of the earlier posts for context. See the Secret History bibliography for sources and supplemental reading.


No Knowledge of Computers

Silicon Valley emerged from work in World War II led by Stanford professor Fred Terman developing microwave electronics for electronic warfare systems. In the 1950s and 1960s, spurred on by Terman, Silicon Valley was selling microwave components and systems to the Defense Department, and the first fledgling chip companies (Shockley, Fairchild, National, Rheem, Signetics…) were in their infancy. But there were no computer companies. Silicon Valley wouldn’t have a computer company until 1966, when Hewlett-Packard shipped the HP 2116 minicomputer.

Meanwhile the biggest and fastest scientific computer companies were in Minnesota. And by 1966 they had been delivering computers for 16 years.

Minneapolis/St. Paul area companies ERA, Control Data and Cray would dominate the world of scientific computing and be an innovation cluster for computing until the mid-1980s. And then they were gone.

Why?

Just as Silicon Valley’s roots can be traced to innovation in World War II, so can Minneapolis/St. Paul’s. The story starts with a company you’ve probably never heard of – Engineering Research Associates.

It Started With Code Breaking
For thousands of years, every nation has tried to keep its diplomatic and military communications secret. They do that by encrypting their messages – scrambling them with a cipher or code. Other nations try to read those messages by attempting to break those codes.
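The idea is easy to see with a toy cipher. The sketch below is purely illustrative – a Caesar shift, not any historical system – but it shows both halves of the story: encryption scrambles the message with a key, and a codebreaker without the key can try every key until plain text appears (the brute-force “exhaustive trial” approach that the machines described later in this post automated).

```python
# Toy illustration only: a Caesar shift cipher and a brute-force attack.
# Real wartime ciphers (Enigma, Purple) were vastly more complex, but the
# principle - try keys until readable text emerges - is the same.

def encrypt(plaintext: str, shift: int) -> str:
    """Rotate each letter forward by `shift` positions (A-Z only)."""
    out = []
    for ch in plaintext.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

def brute_force(ciphertext: str) -> list[tuple[int, str]]:
    """Try all 26 keys; a human (or machine) scans the output for plain text."""
    return [(k, encrypt(ciphertext, -k)) for k in range(26)]

secret = encrypt("ATTACK AT DAWN", 3)   # -> "DWWDFN DW GDZQ"
candidates = brute_force(secret)
assert candidates[3][1] == "ATTACK AT DAWN"  # key 3 recovers the message
```

A Caesar cipher falls to 26 trials; the ciphers ERA’s machines attacked demanded millions, which is why special-purpose electronics mattered.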

During the 1930s the U.S. Army and Navy each had their own small code breaking group. The Navy’s was called CSAW (Communications Supplementary Activities – Washington), also known as OP-20-G. The Army’s codebreaking group was the Signal Intelligence Service (SIS) at Arlington Hall.

The Army focused on decrypting (breaking/decoding) Japan’s diplomatic and Army codes while the Navy worked on breaking Japan’s Naval codes. This was not a harmonious arrangement. The competition between the Army and Navy code breaking groups was so contentious that in 1940 they agreed that the Army would decode and translate Japanese diplomatic code on the even days of the month and the Navy would decode and translate the messages on the odd days of the month. This arrangement lasted until Dec. 7, 1941.

At the start of WWII the Army and Navy code breaking groups each had a few hundred people, mainly focused on breaking Japanese codes. By the end of WWII, with the U.S. also fighting Germany and the Soviet Union looming as a potential adversary, U.S. code breaking had grown to 20,000 people working on breaking the codes of Germany, Japan and the Soviet Union.

The two groups would merge in 1949 as the Armed Forces Security Agency and then become the National Security Agency (NSA) in 1952.

The Rise of the Machines in Cryptography
Prior to 1932 practically all code breaking by the Army and Navy was done by hand. That year they began using commercial mechanical accounting equipment – the IBM keypunch, card sorters, reproducers and tabulators. The Army and Navy each had their own approach to automating cryptography. The Navy had a Rapid Analytical Machines project, hoping to build machines that integrated optics, microfilm and electronics into cryptanalytic tools. (Vannevar Bush at MIT was trying to build one for the Navy.) As WWII loomed, the advanced Rapid Machines projects were put on hold, and the Army and Navy used hundreds of specially modified commercial IBM electromechanical systems to decrypt codes.

Read the sidebars for more detailed information

Electromechanical Cryptologic Systems in WWII

By the spring of 1941, the Army had built the first special-purpose cryptologic attachment to IBM punched card equipment – the GeeWhizzer, which used relays and rotary switches to help break the Japanese diplomatic codes. That same year, the Navy received the first in a series of 13 electromechanical IBM Navy Change Machines to automate decrypting cipher systems used by the Japanese Navy. The Navy attachments were extensive modifications of IBM’s standard card sorters, reproducers and tabulators. Some could be manually reconfigured via plugboards to do different tasks.

During the war the Army and Navy built ~75 of these electro-mechanical and optical systems. Some were standalone units the size of a room.

However, the bulk of the cryptanalysis was done with IBM punch cards, sorters and tabulators, along with special microfilm comparators from Eastman Kodak. By the end of the war the Army and Navy had 750 IBM machines using several million punch cards every day.

IBM’s other mechanical contribution to cryptanalysis was the Letterwriter (codenamed CXCO), a desktop machine that tied electric typewriters to teletypes, automatic tape and card punches, microfilm and eventually to film-processing machines. By adding plugboards, operators could automate some analysis steps. Hundreds of these were bought.

The Navy’s most advanced cryptographic machine work in WWII was building 125 U.S. versions of the British code breaking machine called the BOMBE. These electromechanical BOMBES were used to crack the ENIGMA, the cipher machine used by the Germans.

Designed by the Navy’s OP-20-G team and built at National Cash Register (NCR) in Dayton, this same Computing Machine Lab would build ~25 other types of electromechanical and optical machines, some the size of a room with 3,500 tubes, to assist in breaking Japanese and German codes. By the end of the war the Naval Computing Machine Lab was arguably building the most sophisticated electronic machines in the U.S. However, none of these machines were computers. They had no memory, and all were “hard-wired” to perform just one task.

(Meanwhile, in England, the British code breaking group at Bletchley Park built Colossus, arguably the first digital computer. At the end of the war the British offered the Navy OP-20-G code breaking group a Colossus, but the Navy turned it down.)

Dual-Use Technology
As the war was winding down, the leadership of the Navy Computing Machine Lab in OP-20-G was thinking about how to permanently link commercial, academic and military computing science and innovation to the Navy. After discovering that no commercial company was willing to continue the wartime work of building specialized hardware for codebreaking, the Navy realized it needed a new company. It decided that the best way to get one was to encourage a private, for-profit company to spin out and build advanced crypto-computing systems.

The Secretary of the Navy gave his OK, and three officers in the Navy’s code breaking group (Commander Howard Engstrom, who had been a math professor at Yale; Lieutenant Commander William “Bill” Norris, an electrical engineer; and their contracting officer, Captain Ralph Meader) agreed to start a civilian company to continue building specialized systems to help break codes. While unique for the time, this public-private partnership was in line with the wartime experiment of Vannevar Bush’s OSRD – using civilians in universities to develop military weapons.

Why Minneapolis/St. Paul?
While the idea seemed sound and had the Navy’s backing, the founders were turned down for funding by companies, investment bankers and everyone else – until they talked to John Parker.

Serendipity came to Minneapolis-St. Paul when the Navy team met John Parker. Parker was a Naval Academy graduate and a Minneapolis businessman who owned a glider manufacturing company and was well connected in Washington. Parker agreed to invest, and in January 1946 they founded Engineering Research Associates (ERA). Parker became president and got 50% of the company’s equity for a $20,000 investment (equal to $315K today) and a guaranteed $200,000 line of credit (equal to $3M today). The professional staff owned the other 50%. The new company moved into Parker’s glider hangar. Norris became the VP of Engineering, Engstrom the VP of Research, and Meader the VP of Manufacturing.

The company hit the ground running. Forty-one of the best and brightest ex-Navy technical team members of the Naval Computing Machine Lab in Dayton moved and became the initial technical staff of ERA. When the Navy added its own staff from the Dayton laboratory, the ERA facility was designated a Naval Reserve Base and armed guards were posted at the entrance. The company took on any engineering work that came its way but was kept in business developing new code-breaking machines for the Navy. Most of the machines were custom-built to crack a specific code, and they increasingly used a new ERA invention – magnetic drum memory – to process and analyze the coded texts.

ERA’s headcount grew rapidly. Within a year the company had 145 people; a year later, 420; by 1949, 652; and by 1955, 1,400. Sales in its first fiscal year were $1.5 million ($22 million in today’s dollars).

During World War II the demands of war industries caused millions more Americans to move to where most defense plants were located. Post-war Americans were equally mobile, willing to move to where the opportunities were. And if you were an engineer who wanted to work on the cutting edge of electronics and electromechanical systems, ERA in Minneapolis-St. Paul was the place to be. (Applicants were told that ERA was doing electronics work for government and industry. Those who wanted more detail were given a number of cover stories. Many were told that ERA was working on airline seat reservation systems.)

How Did ERA Grow So Quickly?
The Navy thought of ERA as its “captive corporation.” From its first day ERA had contracts from the Navy OP-20-G codebreaking group, and it built the most advanced electronic systems of the time. Unfortunately for the company, it couldn’t tell anyone, as its customer was the most secret government agency in the country – the National Security Agency.

ERA’s systems were designed to solve problems defined by their Navy code-breaking customer. They fell into two categories: some automated existing workflows for decoding known ciphers; others were used to discover breaks into new ciphers. With the start of the Cold War, that meant Soviet cryptosystems. ERA’s cryptanalytic devices were most often designed to break one particular foreign cipher machine (which kept a stream of new contracts coming). The specific purposes and targets of these colorfully codenamed systems are still classified.

What Did ERA Build For the National Security Agency (NSA)?

By the end of its first year, ERA had contracts for a digital device called Alcatraz, which used thousands of vacuum tubes and relays. A contract for a system named O’Malley followed, then two “exhaustive trial” systems: Hecate, for $250,000 ($3.2 million in today’s dollars), and the follow-on system Warlock ($500,000 – $6.4 million today). Warlock was so large that it was kept at the ERA factory and operated as a remote operations center.

Next came the Robin machines, photoelectric comparators used to attack the Soviet Albatross code. The first two were delivered at the end of 1950. Thirteen more were delivered to the NSA over the next two years.

ERA Disk Drives
One of the code breakers’ biggest problems was storing and operating on large sets of data. To do so, cryptanalysts used thousands of punched cards, miles of paper tape and microfilm. ERA pioneered the development of an early form of disk drive called the magnetic drum memory.

ERA used these magnetic drums in the special systems they built for NSA and later in their Atlas computers. They also sold them as peripherals to other computer companies.

Goldberg, which followed, was another room-sized special purpose machine – a comparator with statistical capabilities – that took photoelectric sensing and paper tape scanning to new heights.

Costing $250,000 ($3.2 million in today’s dollars), it had 7,000 tubes and was one of the first Agency machines to use a magnetic drum to store and handle data.

Another similarly sized system, Demon, followed. It was a dictionary machine designed to crack a Soviet code. It also used a 34-inch-diameter magnetic drum to perform a specialized version of table lookup. Three of these large systems were delivered.

ERA engineers operated at the same relentless and exhausting pace as they had in wartime – similar to how Silicon Valley chip and computer companies would operate three decades later.

For the next decade ERA would continue to deliver a stream of special-purpose code breaking electronic systems and subsystems for the Navy cryptologic community. (These NSA documents give a hint of the number and variety of encryption and decryption equipment at the NSA in the early 1950s: here, here, here, here, and here.)

ERA was undercapitalized and always looking for other products to sell. At the same time it was building systems for the NSA, ERA pursued other lines of business: research studies on liquid-fueled rockets, aircraft antenna couplers (which turned into a profitable product line), a Doppler miss-distance indicator, ground support equipment (GSE) for airlines, and Project Boom, producing instrumentation for what would become underground nuclear tests. A 1950 study for the Office of Naval Research, High-Speed Computing Devices, surveyed all the computers then existent in the U.S. As there was no single source of information about what was happening in the rapidly growing computer field, this ERA report became the bible of early U.S. computers.

The Holy Grail – A Digital Computer for Cryptography?
As complicated as the ERA machines were, they were still single function machines, not general purpose computers. But up until 1946 no one had built a general purpose computer.

With the war over, what the Navy OP-20-G and Army SIS computing wizards really wanted was a single machine that could perform all the major cryptanalytic functions. The most important crypto techniques were based on locating repeated patterns, tallying massive numbers of letter patterns, recognizing plain text, or performing some form of “exhaustive searching.”
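Those statistical techniques are simple to state in code. The sketch below is my own illustration, not ERA’s actual methods (the function names and the toy inputs are mine): one function tallies letter coincidences – the index of coincidence, which distinguishes likely plain text from enciphered gibberish – and another tallies repeated letter patterns, the kind of counting these machines did with tubes, punch cards and drums.

```python
# Illustrative sketches of two classic cryptanalytic tests - not any
# historical machine's algorithm.
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """Probability that two randomly chosen letters in `text` match.
    English plain text scores roughly 0.066; uniformly random letters ~0.038."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

def repeated_patterns(text: str, length: int = 3) -> dict[str, int]:
    """Tally repeated n-letter sequences - a clue to key length in
    polyalphabetic ciphers (the Kasiski examination)."""
    letters = ''.join(c for c in text.upper() if c.isalpha())
    grams = Counter(letters[i:i + length] for i in range(len(letters) - length + 1))
    return {g: n for g, n in grams.items() if n > 1}

# Repeated trigrams stand out even in a short toy string:
assert repeated_patterns("ABCXXABCYYABC")["ABC"] == 3
```

Run against megabytes of intercepted traffic instead of a 13-letter string, tallies like these are exactly the “massive” counting jobs that justified room-sized special-purpose hardware.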

How the NSA Got Their First Computers

Their idea was to put each of these major cryptanalytic functions in separate, dedicated, single-function hardware boxes and connect them through a central switching mechanism. That would allow cryptanalysts to tie the boxes together in any configuration and hook them to free-standing input/output mechanisms. With a stock of these specialized boxes, the agencies believed they could create any desired cryptanalytic engine.

Just as the consensus for this type of architecture was coalescing, a new idea emerged in 1946 – the concept of a general-purpose digital computer with a von Neumann architecture. In contrast to having many separate hardwired functions, a general-purpose computer would have just the four basic arithmetic ones (add, subtract, multiply and divide) along with a few that allowed movement of data between the input-output components, memory, and a single central processor. In theory, one piece of hardware could be made to imitate any machine through an inexpensive and easily changed set of instructions.
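The stored-program idea can be illustrated with a toy sketch (hypothetical, in modern Python rather than anything ERA built): a single fixed interpreter whose behavior is determined entirely by an easily swapped instruction list, the way one piece of hardware could imitate many different machines.

```python
# Hypothetical illustration of the stored-program concept: the "hardware"
# (the run function) never changes; only the program it is fed changes.

def run(program, registers):
    """Execute a program of (op, dest, src) instructions on a register file."""
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
        "div": lambda a, b: a // b,
    }
    for op, dest, src in program:
        registers[dest] = ops[op](registers[dest], registers[src])
    return registers

# The same "machine" imitates two different special-purpose devices
# simply by swapping the instruction list:
doubler = [("add", "r0", "r0")]   # r0 = r0 + r0
squarer = [("mul", "r0", "r0")]   # r0 = r0 * r0

print(run(doubler, {"r0": 21})["r0"])   # 42
print(run(squarer, {"r0": 7})["r0"])    # 49
```

The special-purpose alternative would be one hardwired box per function; here, a cheap change of instructions replaces an expensive change of hardware.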

Opponents of the project believed that a von Neumann design would always be too slow because it had only a single processor to do everything. (This debate between dedicated special-purpose hardware and general-purpose computers continues to this day.)

The tipping point in this debate came in 1946, when an OP-20-G engineer attended the Moore School’s 1946 summer course on computers. The Moore School’s computer group had just completed the ENIAC, arguably the first programmable digital computer, and was beginning to sketch the outlines of its own new computer, the UNIVAC, the first computer aimed at business applications. The engineer came back to the Navy computing group an advocate for building a general-purpose digital computer for codebreaking, having convinced himself that most cryptanalysis could be performed through digital methods. He prepared a report to show that his device would be useful to everyone at OP-20-G. The report remained Top Secret for decades.

The report detailed how a general-purpose machine could have successfully attacked the Japanese Purple codes as well as the German Enigma and Fish systems, and how it would be useful against the current Soviet and Hagelin systems.

This changed everything for the NSA. They were now in the computer business.

ERA’s ATLAS
In 1948 the Navy gave ERA the contract to produce its first digital computer, called ATLAS, to be used by OP-20-G for codebreaking.

Twenty-four months later, ERA delivered the first of two 24-bit ATLAS I computers. The ATLAS was 45’ wide and 9’ long. It weighed 16,000 pounds and was water cooled. Each ATLAS I cost the NSA $1.3 million ($16 million in today’s dollars).

In hindsight, the NSA crossed the Rubicon when the ATLAS I arrived. Today, an intelligence agency without computers is unimaginable. Its purchase showed incredible foresight and initiated a new era of cryptanalysis at the NSA. It was one of a handful of general-purpose, binary computers anywhere. Ten years later the NSA would have 53 computers.

ERA asked the NSA for permission to offer the computer for commercial sale. The NSA required ERA to remove the instructions that made the computer efficient for cryptography, and that became the commercial version – the ERA 1101, announced in December 1951. It had no operating or programming manual, and its input/output facilities were a typewriter, a paper tape reader, and a paper tape punch. At the time, no programming languages existed.

ERA had delivered a breakthrough computer without having an understanding of its potential applications or what a customer might have to do to use the machine. In search of commercial customers, ERA set up an ERA 1101 computer in Washington and offered it to companies as a remote computing center. As far as the commercial world knew, ERA was a startup with no real computing expertise, and this was its first offering. In addition, the only people with experience in writing applications for the 1101 were hidden away at the NSA, and ERA was unable to staff the Arlington office to create programs for customers. Finally, ERA’s penchant for extreme secrecy left them unschooled in the art of marketing, sales, and public relations. When they couldn’t find any customers they donated the ERA 1101 to Georgia Tech.

With their hands on their first-ever general-purpose digital computer, the Navy and ERA rapidly learned what needed to be improved. ERA’s follow-on computer, the ATLAS II, was a 32-bit system with additional instruction extensions for cryptography. Two were delivered to the NSA between 1953 and 1954. The ATLAS II cost the NSA $2.3 million ($35 million today).

Late in 1952, a year before the ATLAS II was delivered to the NSA, ERA told Remington Rand (which now owned the company) that the ATLAS II computer existed (the government had paid its R&D costs) and that it was competitive with the newly announced IBM 701. When the ATLAS II was delivered to the NSA in 1953, ERA again asked for permission to sell it commercially (and again had to remove some instructions), which turned the ATLAS II into the commercial ERA/Univac 1103. (See its 1956 reference manual here.)

This time with Remington Rand’s experience in sales and marketing, the computer was a commercial success with about twenty 1103s sold.

ERA’s Bogart
In 1953, with the ATLAS computers in hand, the Navy realized that a smaller digital computer could be used for data conversion and editing, and to “clean up” raw data for input to larger computers. This was the Bogart.

Physically, Bogart was a “small, compact” (compared to the ATLAS) computer that weighed 3,000 pounds and covered 20 square feet of floor space. (To get a feel for how insanely difficult it was to program a 1950s computer, take a look at the 1957 Bogart programming manual here.) The Bogart design team was headed by Seymour Cray. ERA delivered five Bogart machines to the NSA.

Seymour Cray would reuse features of the Bogart logic design when he designed the Navy Tactical Data System computers, the UNIVAC 490 and the Control Data Corporation’s CDC 1604 and CDC 160.

By 1953, 40% of the University of Minnesota electrical engineering graduates – including Cray – were working for ERA.

The End of an ERA
By 1952, the mainframe computer industry was beginning to take shape, with office machine and electronics companies such as Remington Rand, Burroughs, National Cash Register, Raytheon, RCA and IBM. Parker, still the CEO, realized that the frantic chase of government contracts was unsustainable. (The relationship with the NSA’s procurement offices, now run by Army staff, had become so strained that the Navy Computing Lab was unable to get an official letter of thanks sent to ERA for having developed the ATLAS.)

Parker calculated that ERA needed $5 million to $10 million ($75 to $150 million in today’s dollars) to grow and compete with the existing companies in the commercial computing market. Even after the NSA took over the cryptologic work of OP-20-G, the formal contracts with ERA were done through the Navy’s Bureau of Ships. The NSA was known as “No Such Agency,” and on paper its relationship with ERA didn’t exist. As far as the public knew, ERA’s products were for “the Navy.” Given that ERA’s extraordinary technical work was unknown to anyone other than the NSA, Parker didn’t think he could raise the money via a public offering (venture capital as we know it didn’t exist).

Instead, in 1952, Parker sold ERA to Remington Rand (best known for producing typewriters) for $1.7 million (about $12 million in today’s dollars). A year earlier, Remington Rand had bought Eckert-Mauchly – one of the first U.S. commercial computer companies – and its line of UNIVAC computers. Remington Rand wanted ERA for its government customers. ERA remained a standalone division. The ERA 1101 and 1103 became part of the UNIVAC product line.

Parker became head of sales of the merged computer division. He left in 1956, and years later became chairman of the Teleregister Corporation, the predecessor to Bunker-Ramo. He went on to become a director of several companies, including Northwest Airlines and Martin Marietta.

Remington Rand itself would be acquired by Sperry in 1955 and both ERA and Eckert–Mauchly were folded into a computer division called Sperry-UNIVAC. Much of ERA’s work was dropped, while their drum technology was used in newer UNIVAC machines. In 1986 Sperry merged with Burroughs to form Unisys.

Epilogue
For the next 60 years the NSA would have the largest collection of commercial computers and computing horsepower in the world. They would continue to supplement those with dedicated special purpose hardware.

The reorganization of American Signals Intelligence, leading to the creation of the Armed Forces Security Agency (AFSA) in 1949 and then the NSA in 1952, contributed to the demise of the special relationship between ERA and the codebreakers. The integration of the Army and Navy brought a shift in who made decisions about computer purchasing. The NSA inherited a computer staff from the Army side of technical SIGINT. They had different ties and orientations than the few remaining old Navy hands. As a result, the new core NSA group did not protest when the special group that integrated Agency and ERA work was disbanded. The 1954 termination of the Navy Computing Machine Lab in St. Paul went almost unnoticed.

But the era of Minnesota’s role as a scientific computing and innovation cluster wasn’t over. In fact, it was just getting started. In 1957 ERA co-founder William Norris, Sperry-Univac engineers Seymour Cray and Willis Drake, and ERA’s treasurer Arnold Ryden, along with a half-dozen others, left Sperry-Univac and teamed up with three investors to form a new Minneapolis-based computer company: Control Data Corporation (CDC). For the next two decades Control Data would build the fastest scientific computers in the world.

Read part 18 here and all the Secret History posts here


Leaving Government for the Private Sector – Part 1

Laura Thomas is a former CIA operations officer. Her account of moving in 2021 from CIA ops into a quantum technology company offers insightful career-transition advice for those leaving her agency. Most of her lessons apply to any government employee venturing out to the private sector.
Below is the first of her three-part series.

—-

At least a few times a month, people looking to make the jump ask about my transition, which has led me to consolidate my answers below. To be up front, some of what I write will be controversial and all of it is biased. Due to length, I’ve broken it up into a three-part series.


Is it really a big jump to the private sector? It wasn’t a big jump. At the Agency, 85% of my time was spent navigating bureaucracy and equities, arguing for resources and permission for operations, and dealing with the bottom rung of employees, all while making decisions with either too little data or data overload. Only 15% of my time was spent doing the more exciting operations. Though that 15% – along with the camaraderie of some of my colleagues – made the work deeply meaningful.

Industry is similar. Human nature is human nature, and I deal with many of the same challenges and pull many of the same levers of satisfaction. The difference is my decisions now aren’t life or death.

Another large difference is the greater level of autonomy I now have. Making decisions on the fly in operations is an extreme example of autonomy, of course, but there is always a back-end overhead. Depending on company culture, decision-making can be driven dramatically down with less overhead. As an example, I can make direct recommendations to Congress with no oversight, no internal reporting requirements, and with the trust of the CEO and Board.

Do you miss it? Yes. Nothing beats the rush of bumping a target who agrees to meet with you again or landing in a foreign country for the first time. I no longer know the stories behind the headlines, and I’m not the person making those stories happen. Aside from close friends, I am now treated as an “outsider” by former colleagues.

Fortunately, I still work with smart people solving hard problems every day. And there is still meaning in what I do. Raising tens of millions of dollars from investors to advance a technology faster than the Chinese Communist Party can uses the same skill set. Learning how M&A deals are structured gives me the same thrill as first learning the mechanics of a surveillance detection route. It’s the excitement of being a beginner again, but one with deep and profound experiences, which blunts the downs and enhances the ups that you will face post-Agency.

Today, I get to move our national security mission in emerging technologies farther and faster in ways that I could not in government. And while there is some level of self-justification in these statements, there is nonlinearity in industry. You can move at exponential speed.

How do you transfer your old skills to your current role? Driving decisions, organizational change, and operations in a deep tech company presents many of the same challenges and opportunities as my time in government. Leading and managing people amid uncertainty, high degrees of change, and making decisions remain my day-to-day functions. My current role as a Chief of Staff is in many ways like a DCOS (deputy chief of station) or a traditional Chief of Staff in government. I work behind the scenes, and sometimes out front, to shape our company vision, strategy and then execute, measure, and refine. (Rather than giving away bags of cash in my old job, I now ask for money from investors.)

Relationship dynamics are the same, minus the burden of extreme secrecy. All the things that most of the outside world doesn’t understand as being critical to a handler-asset relationship are just as critical to relationships in industry. Judgment remains paramount.

In the Agency I dealt with a few difficult personalities focused on empire-building and metrics rather than running sound operations. You likely will still deal with this in industry, though there are far fewer layers and entrenched interests to deal with. Knowing how to navigate various stakeholders and interests, avoid landmines, and bring people together is an extremely useful skill in industry. If you’ve been a “doer” who knows how to communicate, work, and gain buy-in across an enterprise that is geographically dispersed, as well as with and against external third parties who are frenemies (or outright hostile), this will serve you well in industry. Talk about it when you’re seeking jobs and interviewing.

Did you make any resume missteps? Most often your resume is not what will get you a job, and submitting one to a recruiter or a resume bank is not the right move. Your resume is almost certainly written in government-speak, and probably more terrible than you realize. It likely lists all the jobs you held (to the degree you can share), the dates, and maybe the general locations, but says nothing about what you actually accomplished or how it specifically relates to industry. You probably won’t even get past the AI filter.

Having a resume that says you served in country X and wrote reports that went to policymakers, and “the President,” might get you a curiosity interview, but won’t get you a job. Unless you can translate how your skills provide commercial value, you won’t get hired.

For starters, figure out which industry you want to work in, narrow it down, and work hard to get intros at the senior levels of a handful of companies (a Board of Directors member, an Advisory Board member, a member of the C-suite – CEO, CTO, CFO, etc. – and/or an investor). You have to do a lot of networking to create your list and build your network. Find a way to meet and captivate them with a story of what you did, how your skills transfer to industry, and how you can add value to their company.

An early learning point for me came as I was speaking with a prospective VC about a job. He flat-out told me he didn’t understand my value to the company. He asked point blank, “How much money did you net the U.S. Government over your career, what exactly did you do in order to get those results, and how would you bring me those same returns?”

You will get asked a question like this.

My suggestion is to say something along these lines: “It’s exponentially harder to be hired by the Agency than it is to get into Harvard, and not only was I hired based on an assessment of my judgment and the ability to operate in ambiguous situations, I then was trained to do just that, and then did it for years.

I was entrusted to create and carry out some of the most sensitive and most important missions that the U.S. Government conducts, often with little direction. Not only did I have to plan and do them, I had to do so in secret, with lives on the line, which is hard to put a price tag on.

You can give me your toughest problem, and I will figure out how to solve it in record time with buy-in from those whom you rarely get buy-in, and position you for multiple shots on goal for future opportunities because I will have your company and sector wired. I can do for you what I did for our country: evaluate opportunity, mitigate risk, and make quick and smart decisions that attack problems differently than a typical insider would. I’ll turn my salary into millions of dollars in returns or investments within two years – not singlehandedly – but in a cooperative way that leverages many parts of the company. We’ll row in unison and we’ll row in the right direction.”

How did you get your current job? I networked nonstop and ran a full targeting campaign for multiple companies to get to their CEOs. I didn’t have a resume when I was looking for jobs. I had to find senior people who had left the agency who would vouch for me.

For my current company, Infleqtion, I was introduced to a former senior Intelligence Community official who had previously served on a board with the CEO, and who made an introduction. When we met, I asked the CEO about his challenges and outlined how I might be able to help. Five months later, the CEO called and said he might have a job for me and invited me to visit and speak with others in the company for their input. I received an offer shortly thereafter.

Meanwhile, three years before I left the Agency I had done a cold outreach on LinkedIn to the person I suspected was the hiring manager for a job advertisement for a company that I liked. The person told me they wanted someone with more business experience for the role, but then came calling three years later when another role opened that they thought would be a good fit. Ultimately, I met each layer up in that company including the CEO.

This all came in handy when negotiating salary, title, and function. From the many, many hours of networking hustle, I received two job offers, which happened in parallel, and I negotiated around the same title and compensation levels. Throughout the entire process, I forwarded them relevant articles and commentary on opportunities to demonstrate my value. Ultimately, I chose Infleqtion because of its mission, its people, and its reputation in US Government circles.

Action: A) If you’re an A-player, stay in government. B) If you’re an A-player and leave, do great things on the outside and return to government service at some point.

Coming up next:

•  Part II – what are the criteria for choosing your next role, the most common types of business roles that formers go into, and how to think about big vs small company risks and current markets.

•  Part III  – title, compensation (salary + equity + bonuses) and resources you can use.

Read the rest of Laura’s blogs at https://www.lauraethomas.com/