The Department of Defense Is Getting Its Innovation Act Together – But More Can Be Done

This post previously appeared in Defense News and C4ISR.

Despite the clear and present danger of threats from China and elsewhere, there’s no agreement on what types of adversaries we’ll face; how we’ll fight, organize, and train; and what weapons or systems we’ll need for future fights. Instead, developing a new doctrine to deal with these new issues is fraught with disagreements, differing objectives, and incumbents who defend the status quo. Yet change in military doctrine is coming. Deputy Defense Secretary Kathleen Hicks is navigating the tightrope of competing interests to make it happen – hopefully in time.

From left, Skydio CEO Adam Bry demonstrates the company’s autonomous systems technology for Deputy Defense Secretary Kathleen Hicks and Doug Beck, director of the Defense Innovation Unit, during a visit to the company’s facility in San Mateo, Calif. (Petty Officer 1st Class Alexander Kubitza/U.S. Navy)


There are several theories of how innovation in military doctrine and new operational concepts occur. Some argue new doctrine emerges when civilians intervene to assist military “mavericks,” e.g., the Goldwater-Nichols Act. Or a military service can generate innovation internally when senior military officers recognize the doctrinal and operational implications of new capabilities, e.g., Rickover and the Nuclear Navy.

But today, innovation in doctrine and concepts is driven by four major external upheavals that simultaneously threaten our military and economic advantage:

  1. China delivering multiple asymmetric offset strategies.
  2. China fielding naval, space and air assets in unprecedented numbers.
  3. The proven value of a massive number of attritable uncrewed systems on the Ukrainian battlefield.
  4. Rapid technological change in artificial intelligence, autonomy, cyber, space, biotechnology, semiconductors, hypersonics, etc., with many advances driven by commercial companies in the U.S. and China.

The Need for Change
The U.S. Department of Defense’s traditional sources of innovation (primes, FFRDCs, service labs) are no longer sufficient by themselves to keep pace.

The speed, depth and breadth of these disruptive changes outpace the responsiveness and agility of our current acquisition system and defense-industrial base. Yet in the decade since these external threats emerged, the DoD’s doctrine, organization, culture, processes, and tolerance for risk have mostly operated as though nothing substantial needed to change.

The result is that the DoD has world-class people and organizations for a world that no longer exists.

It isn’t that the DoD doesn’t know how to innovate on the battlefield. In Iraq and Afghanistan innovative crisis-driven organizations appeared, such as the Joint Improvised-Threat Defeat Agency and the Army’s Rapid Equipping Force. And armed services have bypassed their own bureaucracy by creating rapid capabilities offices. Even today, the Security Assistance Group-Ukraine rapidly delivers weapons.

Unfortunately, these efforts are siloed and ephemeral, disappearing when the immediate crisis is over. They rarely make permanent change at the DoD.

But in the past year, several signs of meaningful change show that the DoD is serious about changing how it operates and radically overhauling its doctrine, concepts, and weapons.

First, the Defense Innovation Unit was elevated to report to the defense secretary. Previously hobbled with a $35 million budget and buried inside the research and engineering organization, its budget and reporting structure were signs of how little the DoD valued commercial innovation.

Now, with DIU rescued from obscurity, its new director Doug Beck chairs the Deputy’s Innovation Steering Group, which oversees defense efforts to rapidly field high-tech capabilities to address urgent operational problems. DIU also put staff in the Navy and U.S. Indo-Pacific Command to discover actual urgent needs.

Furthermore, the House Appropriations Committee signaled the importance of DIU by proposing a fiscal 2024 budget of $1 billion to fund these efforts. And the Navy has signaled, through the creation of the Disruptive Capabilities Office, that it intends to fully participate with DIU.

In addition, Deputy Defense Secretary Hicks unveiled the Replicator initiative, meant to deploy thousands of attritable autonomous systems (i.e. drones – in the air, water and undersea) within the next 18 to 24 months. The initiative is the first test of the Deputy’s Innovation Steering Group’s ability to deliver autonomous systems to warfighters at speed and scale while breaking down organizational barriers. DIU will work with new companies to address anti-access/area denial problems.

Replicator is a harbinger of fundamental DoD doctrinal changes as well as a solid signal to the defense-industrial base that the DoD is serious about procuring components faster, cheaper and with a shorter shelf life.

Finally, at the recent Reagan National Defense Forum, the world felt like it had turned upside down. Defense Secretary Lloyd Austin talked about DIU in his keynote address and came to Reagan immediately following a visit to its headquarters in Silicon Valley, where he met with innovative companies. On panel after panel, high-ranking officers and senior defense officials used the words “disruption,” “innovation,” “speed” and “urgency” often enough to signal they really meant it.

In the audience were a plethora of venture and private capital fund leaders looking for ways to build companies that would deliver innovative capabilities with speed.

Conspicuously, unlike in previous years, sponsor banners at the conference were not the incumbent prime contractors but rather insurgents – new potential primes like Palantir and Anduril. The DoD has woken up. It has realized new and escalating threats require rapid change, or we may not prevail in the next conflict.

Change is hard, especially in military doctrine. (Ask the Marines.) Incumbent suppliers don’t go quietly into the night, and new suppliers almost always underestimate the difficulty and complexity of the task. Existing organizations defend their budget, headcount, and authority. Organizational saboteurs resist change. But adversaries don’t wait for our decades-out plans.

But More Can Be Done

  • Congress and the military services can support change by fully funding the Replicator initiative and the Defense Innovation Unit.
  • The services have no procurement budget for Replicator, and they’ll have to shift existing funds to unmanned and AI programs.
  • The DoD should turn its new innovation process into actual, substantive orders for new companies.
  • And other combatant commands should follow what INDOPACOM is doing.
  • In addition, defense primes should more often aggressively partner with startups.

Change is in the air. Deputy Defense Secretary Hicks is building a coalition of the willing to get it done.

Here’s to hoping it happens in time.

The Secret History of Minnesota: Engineering Research Associates

This post is the latest in the “Secret History Series.” They’ll make much more sense if you watch the video or read some of the earlier posts for context. See the Secret History bibliography for sources and supplemental reading.


No Knowledge of Computers

Silicon Valley emerged from work in World War II led by Stanford professor Fred Terman developing microwave electronics for electronic warfare systems. In the 1950’s and 1960’s, spurred on by Terman, Silicon Valley was selling microwave components and systems to the Defense Department, and the first fledgling chip companies (Shockley, Fairchild, National, Rheem, Signetics…) were in their infancy. But there were no computer companies. Silicon Valley wouldn’t have a computer company until 1966, when Hewlett Packard shipped the HP 2116 minicomputer.

Meanwhile the biggest and fastest scientific computer companies were in Minnesota. And by 1966 they had been delivering computers for 16 years.

Minneapolis/St. Paul area companies ERA, Control Data and Cray would dominate the world of scientific computing and be an innovation cluster for computing until the mid-1980s. And then they were gone.

Why?

Just as Silicon Valley’s roots can be traced to innovation in World War II, so can Minneapolis/St. Paul’s. The story starts with a company you’ve probably never heard of – Engineering Research Associates.

It Started With Code Breaking
For thousands of years, every nation has tried to keep its diplomatic and military communications secret. They do that by encrypting their messages – scrambling them with a cipher or code. Other nations try to read those messages by attempting to break the codes.
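The basic idea can be shown with a toy shift cipher in Python. This is my illustration for intuition only; the machine ciphers of the WWII era, like Enigma, were enormously more complex:

```python
# Toy "Caesar" shift cipher: every letter is rotated a fixed distance
# through the alphabet. Real WWII-era machine ciphers were far more complex.

def encrypt(plaintext: str, shift: int) -> str:
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A")) if c.isalpha() else c
        for c in plaintext.upper()
    )

def decrypt(ciphertext: str, shift: int) -> str:
    # Decryption is just encryption with the opposite shift.
    return encrypt(ciphertext, -shift)

secret = encrypt("ATTACK AT DAWN", 3)   # -> "DWWDFN DW GDZQ"
assert decrypt(secret, 3) == "ATTACK AT DAWN"
```

A codebreaker who doesn’t know the shift can simply try all 26 possibilities, which is why real ciphers needed vastly larger key spaces – and why breaking them eventually demanded machines.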

During the 1930s the U.S. Army and Navy each had their own small code breaking groups. The Navy’s was called CSAW (Communications Supplemental Activity Washington) also known as OPS-20-G. The Army codebreaking group was the Signal Intelligence Service (SIS) at Arlington Hall.

The Army focused on decrypting (breaking/decoding) Japan’s diplomatic and Army codes while the Navy worked on breaking Japan’s Naval codes. This was not a harmonious arrangement. The competition between the Army and Navy code breaking groups was so contentious that in 1940 they agreed that the Army would decode and translate Japanese diplomatic code on the even days of the month and the Navy would decode and translate the messages on the odd days of the month. This arrangement lasted until Dec. 7, 1941.

At the start of WWII the Army and Navy code breaking groups each had a few hundred people, mainly focused on breaking Japanese codes. By the end of WWII, with the U.S. also fighting Germany and the Soviet Union looming as a potential adversary, U.S. code breaking would grow to 20,000 people working on breaking the codes of Germany, Japan and the Soviet Union.

The two groups would merge in 1949 as the Armed Forces Security Agency and then become the National Security Agency (NSA) in 1952.

The Rise of the Machines in Cryptography
Prior to 1932 practically all code breaking by the Army and Navy was done by hand. That year they began using commercial mechanical accounting equipment – the IBM keypunch, card sorters, reproducers and tabulators. The Army and Navy each had their own approach to automating cryptography. The Navy had a Rapid Analytical Machines project with hopes to build machines to integrate optics, microfilm and electronics into cryptanalytic tools. (Vannevar Bush at MIT was trying to build one for the Navy.) As WWII loomed, the advanced Rapid Machines projects were put on hold, and the Army and Navy used hundreds of specially modified commercial IBM electromechanical systems to decrypt codes.

Read the sidebars for more detailed information

Electromechanical Cryptologic Systems in WWII

By the spring of 1941, the Army had built the first special-purpose cryptologic attachment to IBM punched card equipment – the GeeWhizzer, which used relays and rotary switches to help break the Japanese diplomatic codes. That same year, the Navy received the first in a series of 13 electro-mechanical IBM Navy Change Machines to automate decrypting cipher systems used by the Japanese Navy. The Navy attachments were extensive modifications of IBM’s standard card sorters, reproducers and tabulators. Some could be manually reconfigured via plugboards to do different tasks.

During the war the Army and Navy built ~75 of these electro-mechanical and optical systems. Some were standalone units the size of a room.

However, the bulk of the cryptanalysis was done with IBM punch cards, sorters and tabulators, along with special microfilm comparators from Eastman Kodak. By the end of the war the Army and Navy had 750 IBM machines using several million punch cards every day.

IBM’s other mechanical contribution to cryptanalysis was the Letterwriter (codenamed CXCO), a desktop machine that tied electric typewriters to teletypes, automatic tape and card punches, microfilm and eventually to film-processing machines. By adding plug-boards they could automate some analysis steps. Hundreds of these were bought.

The Navy’s most advanced cryptographic machine work in WWII was building 125 U.S. versions of the British code breaking machine called the BOMBE. These electromechanical BOMBES were used to crack the ENIGMA, the cipher machine used by the Germans.

The BOMBES were designed by the Navy’s OPS-20-G team and built at National Cash Register (NCR) in Dayton. The same Computing Machine Lab would build ~25 other types of electromechanical and optical machines, some the size of a room with 3,500 tubes, to assist in breaking Japanese and German codes. By the end of the war the Naval Computing Machine Lab was arguably building the most sophisticated electronic machines in the U.S. However, none of these machines were computers: they had no stored-program memory and were “hard-wired” to perform just one task.

(Meanwhile in England, the British code breaking group at Bletchley Park built Colossus, arguably the first digital computer. At the end of the war the British offered the Navy OPS-20-G code breaking group a Colossus, but the Navy turned it down.)

Dual-Use Technology
As the war was winding down, the leadership of the Navy Computing Machine Lab in OPS-20-G was thinking about how to permanently link commercial, academic and military computing science and innovation to the Navy. After discovering that no commercial company was willing to continue their wartime work of building specialized hardware for codebreaking, the Navy realized it needed a new company. It decided that the best way to create one was to encourage a private for-profit company to spin out and build advanced crypto-computing systems.

The Secretary of the Navy gave his OK, and three officers in the Navy’s code breaking group (Commander Howard Engstrom, who had been a math professor at Yale; Lieutenant Commander William “Bill” Norris, an electrical engineer; and their contracting officer, Captain Ralph Meader) agreed to start a civilian company to continue building specialized systems to help break codes. While unique for the time, this public-private partnership was in line with the wartime experiment of Vannevar Bush’s OSRD – using civilians in universities to develop military weapons.

Why Minneapolis/St. Paul?
While the idea seemed sound and had the Navy’s backing, the founders were turned down for funding by companies, investment bankers, and everyone else, until they talked to John Parker.

Serendipity came to Minneapolis-St. Paul when the Navy team met John Parker. Parker was a Naval Academy graduate and a Minneapolis businessman who owned a glider manufacturing company and was well connected in Washington. Parker agreed to invest. In January 1946, they founded Engineering Research Associates (ERA). Parker became president and got 50% of the company’s equity for a $20,000 investment (equal to $315K today) and a guaranteed $200,000 line of credit (equal to $3M today). The professional staff owned the other 50%. The new company moved into Parker’s glider hangar. Norris became the VP of Engineering, Engstrom the VP of Research, and Meader the VP of Manufacturing.

The company hit the ground running. Forty-one of the best and brightest ex-Navy technical team members of the Naval Computing Machine Lab in Dayton moved and became the initial technical staff of ERA. When the Navy added its own staff from the Dayton laboratory, the ERA facility was designated a Naval Reserve Base and armed guards were posted at the entrance. The company took on any engineering work that came its way but was kept in business developing new code-breaking machines for the Navy. Most of the machines were custom-built to crack a specific code, and they increasingly used a new ERA invention – the magnetic drum memory – to process and analyze the coded texts.

ERA’s headcount grew rapidly. Within a year the company had 145 people; a year later, 420; by 1949, 652; and by 1955, 1,400. Sales in their first fiscal year were $1.5 million ($22 million in today’s dollars).

During World War II the demands of war industries caused millions more Americans to move to where most defense plants were located. Post-war Americans were equally mobile, willing to move where the opportunities were. And if you were an engineer who wanted to work on the cutting edge of electronics and electromechanical systems, ERA in Minneapolis-St. Paul was the place to be. (Applicants were told that ERA was doing electronics work for government and industry. Those who wanted more detail were given a number of cover stories. Many were told that ERA was working on airline seat reservation systems.)

How Did ERA Grow So Quickly?
The Navy thought of ERA as its “captive corporation.” From day one, ERA started with contracts from the Navy OPS-20-G codebreaking group and built the most advanced electronic systems of the time. Unfortunately for the company, it couldn’t tell anyone, as its customer was the most secret government agency in the country – the National Security Agency.

ERA’s systems were designed to solve problems defined by their Navy code-breaking customer. They fell into two categories: some projects were designed to automate existing workflows for decoding known ciphers; others were used to discover ways into new ciphers. And with the start of the Cold War, that meant Soviet cryptosystems. ERA’s cryptanalytic devices were most often designed to break only one particular foreign cipher machine (which kept a stream of new contracts coming). The specific purpose and target of each of these colorfully codenamed systems are still classified.

What Did ERA Build For the National Security Agency (NSA)?

By the end of its first year, ERA had contracts for a digital device called Alcatraz, which used thousands of vacuum tubes and relays. A contract for a system named O’Malley followed. Then came two “exhaustive trial” systems: Hecate, for $250,000 ($3.2 million in today’s dollars), and the follow-on system, Warlock ($500,000 – $6.4 million today). Warlock was so large that it was kept at the ERA factory and operated as a remote operations center.

Next came the Robin machines, photoelectric comparators used to attack the Soviet Albatross code. The first two were delivered at the end of 1950. Thirteen more were delivered to the NSA over the next two years.

ERA Disk Drives
One of the problems code breakers had was storing and operating on large sets of data. To do so, cryptanalysts used thousands of punched cards, miles of paper tape, and microfilm. ERA pioneered an early form of the disk drive: the magnetic drum memory.

ERA used these magnetic drums in the special systems they built for NSA and later in their Atlas computers. They also sold them as peripherals to other computer companies.

Goldberg, which followed, was another room-sized special purpose machine – a comparator with statistical capabilities – that took photoelectric sensing and paper tape scanning to new heights.

Costing $250,000 ($3.2 million in today’s dollars), it had 7,000 tubes and was one of the first Agency machines to use a magnetic drum to store and handle data.

Another similarly sized system, Demon, followed. It was a dictionary machine designed to crack a Soviet code, and it used a 34-inch-diameter magnetic drum to perform a specialized version of table lookup. Three of these large systems were delivered.

ERA engineers operated at the same relentless and exhausting pace as they had in wartime – similar to how Silicon Valley chip and computer companies would operate three decades later.

For the next decade ERA would continue to deliver a stream of special-purpose code breaking electronic systems and subsystems for the Navy cryptologic community. (These NSA documents give a hint at the number and variety of encryption and decryption equipment at NSA in the early 1950’s: here, here, here, here, and here.)

ERA was undercapitalized and always looking for other products to sell. At the same time it was building systems for the NSA, ERA pursued other lines of business: research studies on liquid-fueled rockets; aircraft antenna couplers (which turned into a profitable product line); a Doppler Miss Distance Indicator; Ground Support Equipment (GSE) for airlines; Project Boom, producing instrumentation for what would become underground nuclear tests; and a 1950 study for the Office of Naval Research called High-Speed Computing Devices, a survey of all computers then existent in the U.S. As there was no single source of information about what was happening in the rapidly growing computer field, this ERA report became the bible of early U.S. computers.

The Holy Grail – A Digital Computer for Cryptography?
As complicated as the ERA machines were, they were still single function machines, not general purpose computers. But up until 1946 no one had built a general purpose computer.

With the war over, what the Navy OP-20-G and Army SIS computing wizards really wanted was a single machine that could perform all the major cryptanalytic functions. The most important of the crypto techniques were based upon locating repeated patterns, tallying massive numbers of letter patterns, recognizing plain text, or performing some form of “exhaustive searching.”

How the NSA Got Their First Computers

Their idea was to put each of these major cryptanalytic functions in separate, dedicated, single-function hardware boxes, connect them through a central switching mechanism that would allow cryptanalysts to tie them together in any configuration, and hook it all to free-standing input/output mechanisms. With a stock of these specialized boxes, the agencies believed they could create any desired cryptanalytic engine.

Just as the consensus for this type of architecture was coalescing, a new idea emerged in 1946 – the concept of a general purpose digital computer with a von Neumann architecture. In contrast to having many separate hardwired functions, a general purpose computer would have just the four basic arithmetic ones (add, subtract, multiply and divide) along with a few that allowed movement of data between the input-output components, memory, and a single central processor. In theory, one piece of hardware could be made to imitate any machine through an inexpensive and easily changed set of instructions.

Opponents of the project believed that a von Neumann design would always be too slow because it had only a single processor to do everything. (This debate between dedicated special-purpose hardware and general-purpose computers continues to this day.)
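The architectural argument is easy to see in miniature. Below is a toy sketch of my own, not period hardware: a hardwired "box" does exactly one task, while a general-purpose machine with a handful of primitive operations can imitate it – or anything else – by swapping its stored program:

```python
# A hardwired device: the function is frozen into the "hardware".
def hardwired_doubler(x: int) -> int:
    return x + x

# A toy von Neumann-style machine: one processor, a few primitive ops,
# and behavior defined entirely by a cheap, easily changed program.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def run(program: list, acc: int = 0) -> int:
    """Execute a stored program of (operation, operand) pairs on one accumulator."""
    for op, operand in program:
        acc = OPS[op](acc, operand)
    return acc

# The same general hardware imitates the doubler by changing the
# program, not the machine:
assert run([("add", 21), ("mul", 2)]) == hardwired_doubler(21)  # both yield 42
```

The opponents’ objection is also visible here: the general machine executes one instruction at a time through a single processor, while a dedicated box can be wired to do its one job at full speed.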

The tipping point in this debate came in 1946, when an OPS-20-G engineer attended the Moore School’s summer course on computers. The Moore School’s computer group had just completed the ENIAC, arguably the first programmable digital computer, and was beginning to sketch the outlines of its own new computer, the UNIVAC, the first computer for business applications. The engineer came back to the Navy computing group an advocate for building a general-purpose digital computer for codebreaking, having convinced himself that most cryptanalysis could be performed through digital methods. He prepared a report to show that such a device would be useful to everyone at OP-20-G. The report remained Top Secret for decades.

The report detailed how a general-purpose machine could have successfully attacked the Japanese Purple codes as well as the German Enigma and Fish systems, and how it would be useful against the current Soviet and Hagelin systems.

This changed everything for the NSA. They were now in the computer business.

ERA’s ATLAS
In 1948 the Navy gave ERA the contract to produce its first digital computer, called ATLAS, to be used by OPS-20-G for codebreaking.

Twenty-four months later, ERA delivered the first of two 24-bit ATLAS I computers. The Atlas was 45’ wide and 9’ long. It weighed 16,000 pounds and was water cooled. Each ATLAS I cost the NSA $1.3 million ($16 million in today’s dollars).

In hindsight, the NSA crossed the Rubicon when the ATLAS I arrived. Today, an intelligence agency without computers is unimaginable. Its purchase showed incredible foresight and initiated a new era of cryptanalysis at the NSA. It was one of the handful of general purpose, binary computers anywhere. Ten years later the NSA would have 53 computers.

ERA asked the NSA for permission to offer the computer for commercial sale. The NSA required ERA to remove the instructions that made the computer efficient for cryptography, and the result became the commercial version – the ERA 1101, announced in December 1951. It had no operating or programming manual, and its input/output facilities were a typewriter, a paper tape reader, and a paper tape punch. At the time, no programming languages existed.

ERA had delivered a breakthrough computer without understanding its potential applications or what a customer might have to do to use the machine. In search of commercial customers, ERA set up an ERA 1101 computer in Washington and offered it to companies as a remote computing center. As far as the commercial world knew, ERA was a startup with no real computing expertise, and this was its first offering. In addition, the only people with experience in writing applications for the 1101 were hidden away at the NSA, and ERA was unable to staff the Arlington office to create programs for customers. Finally, ERA’s penchant for extreme secrecy left them unschooled in the art of marketing, sales, and public relations. When they couldn’t find any customers, they donated the ERA 1101 to Georgia Tech.

With their hands on their first ever general purpose digital computer, the Navy and ERA rapidly learned what needed to be improved. ERA’s follow-on computer, the ATLAS II was a 32-bit system with additional instruction extensions for cryptography. Two were delivered to NSA between 1953 and 1954. ATLAS II cost the NSA $2.3 million ($35 million today.)

Late in 1952, a year before the ATLAS II was delivered to the NSA, ERA told Remington Rand (which now owned the company) that the ATLAS II computer existed, that the government had paid for its R&D costs, and that it was competitive with the newly announced IBM 701. When the ATLAS II was delivered to the NSA in 1953, ERA again asked for permission to sell it commercially (and again had to remove some instructions), which turned the ATLAS II into the commercial ERA/Univac 1103. (See its 1956 reference manual here.)

This time with Remington Rand’s experience in sales and marketing, the computer was a commercial success with about twenty 1103s sold.

ERA’s Bogart
In 1953, with the ATLAS computers in hand, the Navy realized that a smaller digital computer could be used for data conversion and editing, and to “clean up” raw data for input to larger computers. This was the Bogart.

Physically, Bogart was a “small, compact” (compared to the ATLAS) computer that weighed 3,000 pounds and covered 20 square feet of floor space. (To get a feel for how insanely difficult it was to program a 1950’s computer, take a look at the 1957 Bogart programming manual here.) The Bogart design team was headed by Seymour Cray. ERA delivered five Bogart machines to the NSA.

Seymour Cray would reuse features of the Bogart logic design when he designed the Navy Tactical Data System computers, the UNIVAC 490 and the Control Data Corporation’s CDC 1604 and CDC 160.

By 1953, 40% of the University of Minnesota’s electrical engineering graduates – including Cray – were working for ERA.

The End of an ERA
By 1952, the mainframe computer industry was beginning to take shape, with office machine and electronics companies such as Remington Rand, Burroughs, National Cash Register, Raytheon, RCA and IBM. Parker, still the CEO, realized that the frantic chase of government contracts was unsustainable. (The relationship with the NSA’s procurement offices, now run by Army staff, had become so strained that the Navy Computing Lab was unable to get an official letter of thanks sent to ERA for having developed the ATLAS.)

Parker calculated that ERA needed $5 million to $10 million ($75 to $150 million in today’s dollars) to grow and compete with the existing companies in the commercial computing market. Even after the NSA took over the cryptologic work of OPS-20-G, the formal contracts with ERA were done through the Navy’s Bureau of Ships. NSA was known as “No Such Agency,” and on paper its relationship with ERA didn’t exist. As far as the public knew, ERA’s products were for “the Navy.” Given that ERA’s extraordinary technical work was unknown to anyone other than the NSA, Parker didn’t think he could raise the money via a public offering (venture capital as we know it didn’t exist).

Instead, in 1952, Parker sold ERA to Remington Rand (best known for producing typewriters) for $1.7M (about $12M in today’s dollars). A year earlier, Remington Rand had bought Eckert-Mauchly – one of the first U.S. commercial computer companies – and its line of UNIVAC computers. Remington Rand wanted ERA for its government customers. ERA remained a standalone division, and the ERA 1101 and 1103 became part of the UNIVAC product line.

Parker became head of sales of the merged computer division. He left in 1956 and years later he became chairman of the Teleregister Corporation, the predecessor to Bunker-Ramo. He went on to become a director of several companies, including Northwest Airlines and Martin Marietta.

Remington Rand itself would be acquired by Sperry in 1955 and both ERA and Eckert–Mauchly were folded into a computer division called Sperry-UNIVAC. Much of ERA’s work was dropped, while their drum technology was used in newer UNIVAC machines. In 1986 Sperry merged with Burroughs to form Unisys.

Epilogue
For the next 60 years the NSA would have the largest collection of commercial computers and computing horsepower in the world. They would continue to supplement those with dedicated special purpose hardware.

The reorganization of American Signals Intelligence – the creation of the Armed Forces Security Agency (AFSA) in 1949, then the NSA in 1952 – contributed to the demise of the special relationship between ERA and the codebreakers. The integration of the Army and Navy brought a shift in who made decisions about computer purchasing. NSA inherited a computer staff from the Army side of technical SIGINT, with different ties and orientations than the few remaining old Navy hands. As a result, the new core NSA group did not protest when the special group that integrated Agency and ERA work was disbanded. The 1954 termination of the Navy Computing Machine Lab in St. Paul went almost unnoticed.

But the era of Minnesota’s role as a scientific computing and innovation cluster wasn’t over. In fact, it was just getting started. In 1957 ERA co-founder William Norris, Sperry-Univac engineers Seymour Cray and Willis Drake, and ERA’s treasurer Arnold Ryden, along with a half dozen others, left Sperry-Univac and teamed up with three investors to form a new Minneapolis-based computer company: Control Data Corporation (CDC). For the next two decades Control Data would build the fastest scientific computers in the world.

Read part 18 here and all the Secret History posts here


Even the Smartest VCs Sometimes Get it Wrong – Bill Gurley and Regulated Markets

Bill Gurley was one of Silicon Valley’s smartest and most successful VCs. He recently gave a talk at the All-In Summit that was really two talks in one: the first part railed against the consequences of regulatory capture on innovation; the second covered the consequences of premature government regulation of AI and why the incumbents are all for it. He illustrated his talk with regulatory horror stories from the telecom market, electronic health records, and Covid antigen tests.

Bill’s closing line, “The reason why Silicon Valley is so successful is that it’s so fxxxng far away from Washington,” received great applause. Unfortunately, for startups entering a regulated market, following this advice might not be the optimal path.

(You can watch Bill’s entire 24-minute talk here, or his thesis summarized in this 7-second clip: https://youtu.be/HMIyDf3gBoY?feature=shared)


Let’s be clear: rent seekers and regulatory capture strangle innovation in its crib. It’s the antithesis of how founders want to build a business. (And to be fair, that was the point of the last part of Bill’s presentation.) But entrepreneurs entering regulated markets need to understand how the game is played, how they can play it, what their VCs should be doing to help them, and how to win.

Regulation
What’s regulatory capture? Why is it bad? And why was Bill’s advice of staying away from Washington flawed for startups?

All businesses have regulations to follow – paying taxes, incorporating the company, complying with financial reporting. And some have to ensure that there are no blocking patents. But regulated markets are different. Regulated marketplaces have significant government regulation to promote and protect (ostensibly) the public interest for the benefit of all citizens. A good example is the regulations the FDA (Food and Drug Administration) has in place for approving new drugs and medical devices.

In a regulated market, the government controls how products and services are allowed to enter the market, what prices may be charged, what features the product/service must have, safety of the product, environmental regulations, labor laws, domestic/foreign content, etc. In the U.S. regulation happens on three levels:

  • federal laws, applicable across the country, made by the federal government in Washington, D.C.
  • state laws, applicable within a single state, imposed by state governments
  • local laws, imposed by city and county governments

Federal Regulation
In the U.S. the government has regulatory authority over commerce between the states, foreign trade, and other business activities of national scope. Congress decides what things need to be regulated and passes laws that determine those regulations. Congress often does not include all the details needed to explain how an individual, business, state or local government, or others might follow the law. To make the laws work day-to-day, Congress authorizes government agencies to write the regulations which set the specific requirements about what is legal and what isn’t. The regulatory agencies then oversee these requirements.

In the U.S. startups might run into an alphabet soup of federal regulatory agencies, for example: ATF, CFPB, DEA, DoD, EPA, FAA, FCC, FDA, FDIC, FERC, FTC, OCC, OSHA, SEC. These agencies exist because Congress passed laws.

State Regulation
In addition to federal laws, each State has its own regulatory environment that applies to businesses operating within the state in areas such as land-use, zoning, motor vehicles, state banking, building codes, public utilities, drug laws, etc.

Cities/County Regulation
Finally, local cities and counties may have local laws and regulatory agencies or departments like taxi commissions, zoning laws, public safety, permitting, building codes, sanitation, drug laws, etc.

Incumbents Advantage – Rent Seekers and Regulatory Capture
If you’re a startup entering a regulated market (Telecom, Pharma, Education, Energy, Department of Defense, Intelligence, Health, Fintech, Insurance, Transportation, Agriculture, Gaming, Cannabis, Petrochemicals, Automotive, Air Transportation, Fishing, et al.) you need to know that the game is rigged. And it’s not in your favor.

Incumbents in a regulated market keep out new, innovative, and disruptive competitors by “gaming the system” in their favor. They do this through rent seeking and/or regulatory capture. (Bill Gurley’s point.)

Rent seekers are individuals or organizations with successful existing business models who use government regulation and lawsuits to keep out new entrants that might threaten their business models. They use every argument – from public safety to lack of quality to loss of jobs – to lobby against the new entrants. Rent seekers spend money lobbying to increase their share of an existing market instead of creating new products or markets; they create nothing of value.

These barriers to new innovative startups are called economic rent. Examples of economic rent include state automobile franchise laws, taxi medallion laws, limits on charter schools, cable company monopolies, patent trolls, bribery of government officials, corruption, and regulatory capture.

Rent-seeking lobbyists go directly to legislative bodies (Congress, state legislatures, city councils) to persuade government officials and their staff to enact laws and regulations in exchange for campaign contributions, appeasing influential voting blocs, or the “revolving door” – offering officials future jobs in the industry they regulated. They use the courts to tie up and exhaust a startup’s limited financial resources. Their lobbyists also work through regulatory bodies like the FCC, SEC, FTC, Public Utility, Taxi, or Insurance Commissions, School Boards, etc.

Regulatory capture is what happens when the very organizations set up to protect the public’s health and safety, or to provide an equal playing field, are taken over by the very people they’re supposed to regulate. These are the examples Bill Gurley was talking about.

Tech Companies Use Regulatory Capture
In my first two decades inside the Silicon Valley bubble we built products people wanted and needed. We competed with other technology companies and, like Bill Gurley, largely ignored whatever was going on in Washington. We were content that Washington didn’t know we existed. Unless you were in life sciences (therapeutics, medical devices, or diagnostics), very little government regulation applied. We ignored Washington and Washington mostly ignored us (defense contractors excepted).

The tech ecosystem got a rude awakening in May 1998 when the U.S. Justice Department and 20 state attorneys general brought suit against Microsoft for anticompetitive practices designed to maintain its monopoly in PC operating systems and internet browsers. While tech hadn’t come to Washington, Washington came for the tech industry. Until then no tech company had an organized lobbying presence of significance in D.C.

Fast forward 25 years. The tech industry grew up and realized that rather than running away from Washington, it needed to play the game. Companies like Intuit mastered regulatory capture as a massive advantage, while Big Tech (Microsoft, Amazon, Google, Facebook, Oracle, Intuit, Uber et al.) spent $124 million on lobbying and campaign contributions in the 2020 election, with 333 registered lobbyists.

Startups have successfully disrupted regulated markets and rent seekers – Uber with local taxi licensing laws (Bill Gurley sat on its board, a role highlighted in a Showtime series), Airbnb with local zoning laws, Tesla with state dealership licensing, SpaceX competing with the Air Force and United Launch Alliance – and in doing so they have built impenetrable moats for their businesses.

What Do Startups Need to Know?
There’s nothing magical about dealing with regulated markets. However, every regulated market has its own rules, dynamics, language, players, politics, etc. And they are all very different from the business-to-consumer or business-to-business markets most founders and their investors are familiar with.

How do you know you’re in a regulated market? It’s simple – ask yourself three questions:

  • Can I do anything I want or are there laws and regulations that might stop me or slow me down?
  • Are there incumbents who will view us as a threat to the status quo? Can they use laws and regulations to impede our growth?
  • Do you understand how the regulatory process works? For example, do you just fill out an online form and pay a $50 fee with your credit card and get a permit? Or do you need to spend millions of dollars and years running clinical trials to get FDA clearance and approval? And are these approvals good in every state? In every country? What do you need to do to sell worldwide?

What Do I Need to Do?
The first step is to understand what you’re up against. Who are the incumbents, who do they influence, how much are they spending on influence, who are their lobbyists, and what are their messages? And most importantly, how are they going to stop you from scaling?

Next, figure out who the other stakeholders, saboteurs, rent seekers, influencers, bureaucrats, politicians, and regulators are. As you get out of the building and start talking to people you’ll discover more and more players. You’ll discover that the interests of your product’s end user, a regulator, an advocacy group, key opinion leaders, and a politician are radically different. For you to succeed you need to understand all of them.

Start diagramming the relationships of all the customer segments. Who influences whom? How do they interconnect? What laws and regulations are in the way of your deployment and scale? How powerful is each of the players? For the politicians, what are their public positions versus their actual votes and performance? Follow the money by using opensecrets.org. If an elected official’s major donor is organization X, you’re not going to be able to convince them with a cogent argument. And most importantly, start asking, “who are the best lobbyists/advisors in this market?”

The book Regulatory Hacking calls this diagram the Power Map. As an example, this is a diagram of the multiple beneficiaries and stakeholders that a software company developing math software for middle school students has to navigate. Your diagram may be more complex. There is no possible way you can draw this on day one of your startup. You’ll discover these players as you get out of the building and start filling out your value proposition canvases.

While this sounds complicated, entering a regulated market should be a strategy, not a disconnected set of tactics. (Or worse, obliviousness.) You need a lobbying/government relations strategy from day one.

Draw your strategy diagram (see figure below) and share it with your board. What regulatory issues need to be solved? In what order? For example, do you beg for forgiveness or ask for permission? How do you get regulators who don’t see a need to change to move? How do you get your early customers to advocate on your behalf? (The books The Fixer and Regulatory Hacking give examples of regulatory pitfalls, problems and suggested solutions.)

Most early-stage startups don’t have regulatory domain expertise in-house. Get outside advice at each step. Hire advisors from inside the industry, but use them to make you smarter, not just to outsource the work. Having a meeting or two with a congressman or contributing to their campaign might get you a return call, but only sustained engagement (via money, influence, and an on-the-ground presence in D.C.) will move the needle. Eventually you’ll need to build an in-house team to manage regulatory affairs.

Choose VCs who have experience operating in regulated markets – not those who hope it stays away. Have them tell you how they helped other companies in their portfolio succeed, pitfalls to avoid, and the lobbying resources they can bring to bear. You and your board need to be in sync about the costs and risks of getting into a street fight entering these markets. (Strategic choices include asking for permission versus begging forgiveness, and public versus private battles. Tactical activities can include influencing key opinion leaders, political donations, advocacy groups, and grassroots and grasstops campaigns.)

Finally, as an innovation ecosystem (VCs, their limited partners, and startups) we need to do a better job of insisting on transparency in government, calling out rent seekers and regulators who no longer regulate, and trying to keep government from prematurely regulating new innovation. For the majority of regulators and policymakers who want to make the system better, we can help shape policy by educating them on why the products/changes we are proposing make the world a better place.

But startups? They need to understand the game and work the system.

Post note: Ironically, the best example of premature government regulation was AT&T and U.S. telephone service. In 1921 AT&T argued that telephone service was a natural monopoly and that competition was inefficient. The government agreed, and landline communications became a government-sanctioned monopoly for the next 63 years. Innovation in telecom outside of AT&T died; the industry could only innovate as fast as AT&T approved. This may be a proxy for why the incumbent AI providers went to Congress: they want to lock in their lead.

Lessons Learned

  • If you’re in a regulated market, the game is often rigged by incumbents
    • Understand Rent Seeking and Regulatory Capture
    • You need a lobbying/government relations strategy from day one
  • Choose VCs who understand how to play the game, not those who hope it stays away
  • The CEO needs to get out of the building to understand the regulatory ecosystem
    • CEO and board need to be in sync about the learning and strategy
  • Hire initial lobbyists (but learn from them, not just outsource to them)
    • As the company gets larger staff an internal public affairs group to manage the lobbying effort
  • If you figure out the regulatory game, it can be your defensible moat

Leaving Government for the Private Sector – Part 2

Laura Thomas is a former CIA operations officer. Her account of how she moved in 2021 from CIA ops to a quantum technology company offers insightful career transition advice for those leaving her agency. Most of her lessons apply to any government employee venturing out to the private sector.

Below is the second of her three-part series. Read part one here.


Before leaving government service one of my biggest challenges was to understand how my skill as a Case Officer would translate into a job in the commercial world. I had to spend a lot of time learning a new language and new job descriptions. Here’s what I learned.

What would you like to do/can do? Some commercial company roles:

Business Development or “BD” roles: Case Officers are well suited for business development roles, as it’s akin to the first half of the CIA recruitment cycle. In a business development role you’re out shaping the perception of your company in the market (networking), identifying leads, and contacting leads. The larger the company, the more they’ll separate business development and sales, with business development focused primarily on lead generation and sales focused on closing the actual sale of the product or service.

Sales roles: The sales cycle is similar to the recruitment cycle of a source. At a small company, you have the ability to do the whole sales cycle, which integrates strategy, business development, sales, and customer success: figure out what you should sell, who you should sell it to, how to get in touch with them, actually get in touch with them, sell it, keep selling to them and make sure they’re happy (customer success), and at some point, decide whether to move on to better sales targets, or convince your company they need to be selling something different. At a large company, sales usually means someone else has done the broad shaping for a potential customer. You just have to go in and work through the mechanics of selling them on your product or service.

Customer Success roles: This is akin to handling a source. You make sure the customer is happy and keeps buying, preferably more.

Security roles: Some ex-Agency people gravitate to roles in security. I discovered that while I know a lot about tradecraft-related security and how to stay alive for the first minutes of an ambush, I know little about building security and computer systems security. Some companies will see your CIA background and confuse it with roles that are more akin to FBI or law enforcement. If you worked in an actual cybersecurity or security role, you can learn it and integrate well into those teams.

Trust and Safety roles, Threat and Business Intelligence roles: If you’ve been a targeter and/or an analyst, these might be good fits. The role broadly is to protect a company and its people/users (or multiple companies) by tracking bad actors and threats. In large companies these roles report to a security division (however, there are entire companies just providing Threat and Business Intelligence).

Government Affairs/Legislative Affairs roles: Large companies pay to have people represent them on Capitol Hill and advocate for their interests. If you have significant experience engaging with and briefing the Hill, this is a possibility; however, you’ll be competing against staffers rotating off committees who are much better equipped than you in networking and know-how. You may be able to join a larger company’s government affairs team at a junior to mid-level, and you’ll probably find your skills most relevant to a company that works on national security-related issues.

At first, many startups hire a lobbying firm. You may be able to step in once they want to bring this role in-house, but keep in mind that they’re looking for the Capitol Hill contacts you already have, as well as your ability to work the legislative process, not just your briefing or networking skills.

Strategy and Operations roles: These roles help make sure vision, resources (budgets and people), and the market opportunity are aligned. Working closely with the CEO or CFO, they help figure out what to do to make things go right, and what to do when things go wrong. The smaller the company, the bigger your chance at a role like this.

The Chief of Staff role, for example, is largely a strategy role, but it is heavily dependent on the needs of the CEO/company. In my case, at Infleqtion I’m the person who tells our CEO what he needs to hear, not necessarily what he wants to hear. I also serve as an executive advisor – from product strategy to setting business milestones to working with investors. I also work closely with all members of the executive team, the Board of Directors, and the Advisory Board. I think this role is ideal for a former Case Officer, but I’m obviously biased.

Larger companies hiring a Chief of Staff often look for someone who has an MBA, experience with one of the big consulting firms, or experience doing the job already.

Entrepreneur: A successful CIA case officer must be able to operate amid ambiguity and make judgment calls that require strong second- and third-order thinking. Achievement-focused and good storytellers, they know how to figure things out, “read the room,” and assess and mitigate risk. Most people believe case officers and entrepreneurs are big risk takers, when, in fact, they’re risk mitigators.

If you find an A-player CIA officer jumping into a founder role midway through their career (or decide to start something yourself), they’ll probably go on to do great things. They have enough confidence in themselves to leave without the safety net of a future pension, as well as the energy, ambition, and know-how to navigate uncertainty. The same Emotional Quotient and approach that attracts investors will also attract excellent employees.

Venture Capitalist: An early-stage VC requires some of the same skills as a Case Officer – spotting, assessing, developing, recruiting, and handling founders who are building a company amid an uncertain operating environment that will bring a heavy return on investment. (However, many VCs have also accrued years or decades as domain experts in the technologies and/or industries they invest in.) Being a successful VC and a successful case officer both involve some level of luck and timing misattributed to skill. The biggest difference is that in the VC world, nobody is going to die.

If you’re a retiree leaving with a full pension, you have different choices than a “job.” You can:

  • consult
  • sit on a company Advisory Board or Board of Directors
  • serve as a senior executive at a small company (you’ll be expected to actually work, not pontificate and delegate) or mid- to senior level at a larger company (you might just be a face)
  • get hired by Wall Street/Private Equity/VC firms assuming you’re senior enough and have enough New York or Silicon Valley connections

For the last three, you’re generally being hired for your name and the introductions you can make, assuming you’re within the top 15 of leadership.

Boards: The term “board” can mean two very different things in the commercial world – an Advisory Board versus a Board of Directors. An Advisory Board provides advice. It has no legal role in the company. Often companies will put you on their advisory board just to use your name and image (and not really want your advice). Every company can organize and compensate its advisory board any way it likes. Some Advisory Boards meet once a quarter, others once a year. Advisory Board members may field weekly to monthly emails and calls from the company executive team to provide feedback on strategy and positioning and make introductions. Advisory Board members are often paid in a balance of equity (stock options) and cash (“cash” is the industry term for money wired to your bank account).

A Board of Directors has a formal and legal role. It provides governance and financial oversight to the company. It can vote to hire and fire the CEO. CEOs seek its advice (and often must seek its formal approval) for major strategic decisions such as acquisitions, major budget changes, hiring of C-level executives, etc. Formal Board positions are harder to come by. If you’re an A-player from the senior-most ranks, consider joining a private company board if you’re aligned with their mission and team. They need you.

For me, personally: People in the senior ranks at startups usually call themselves operators. Obviously, that’s a different definition of the term. I knew I wanted to stay/go into an operator role because that’s where the business learning I sought would happen. I didn’t want to have to sell back into the intelligence community, because I didn’t want to leverage my contacts so tactically, but plenty of people do it (and we need good people to do it. We all know how badly the government needs commercial technology solutions). From the start, my job was closest to a business development role. Because it was a small company and I was coming in top-down through the CEO rather than responding to a job advertisement, I was able to craft my function and initial title as “Senior Director of National Security Solutions.” I began writing unsolicited strategy docs for the CEO. This ultimately led me into a strategy role, which led me into a strategy and fundraising role. I also took an advisory role with another startup working on national security technology, QuSecure.

Where should you go? Big company or small? Choose big for stability and higher salaries. Choose small for learning, growth, and impact. In large companies, they usually want you in a narrow and specific role. However, you will have more roles you could move into if the first one isn’t a great fit. If you join a big company, assuming it’s public, you’ll get stock, which can immediately translate into financial gains assuming the company performs well. The salaries are almost always higher. You can get rich in a big company (at least by our humble government standards), but rarely wealthy based on returns from that company alone.

At small companies, you wear many hats at once. I wanted to understand the daily challenges a company faced at the senior levels in trying to push a new technology in government markets and commercial markets, and how capital flows impacted all of this. However, a bigger company is more defined in terms of a 9-to-5. I work just as much now as I did in the field. And though I work from an office most days, I also work from home, which affords a lot of flexibility because I’m not chained to a SCIF.

You can get wealthy with the right startup, but many startups fail, so it’s a long shot. Of course, “wealth” is subjective. More than money, most of us crave impact. Both are possible on the outside.

How should you think about and mitigate risk if joining a startup? Know your appetite for risk. If you’re really bold, join an early-stage company (seed stage, Series A), but have conviction about the team. You may need to cover some portion of your own salary for a year. If you need to make a salary equivalent to what you make in government, target startups that have closed a Series B round within the last few months. If you’ve received a formal offer from a startup, ask how much runway (months of cash left) they have. If they won’t discuss any aspects of runway or value of the equity package they’re offering, look elsewhere.

Look before you leap. Talk with multiple employees at the company. Try to talk with an investor in the company. Research their Board of Directors and Advisory Board members and contact some of them. Look for people on LinkedIn who used to work at the company, reach out to them and ask why they left.

Being part of a “failed” startup is not a badge of dishonor. Most startups fail, especially those in the early stages. So long as you and the company weren’t operating unethically or illegally, it’s not a red flag on your resume. In fact, this sort of experience matters far more to the next prospective tech startup employer than the decade+ that you put in at the Agency.

Action:

A) If you’re an A-player, stay in government.
B) If you’re an A-player and leave, do great things on the outside and return to government service at some point.

Coming up next:

  • Part III – title, compensation (salary + equity + bonuses) and resources you can use.

Read the rest of Laura’s blogs at https://www.lauraethomas.com/

Leaving Government for the Private Sector – Part 1

Laura Thomas is a former CIA operations officer. Her account of how she moved in 2021 from CIA ops into a quantum technology company offers insightful career transition advice for those leaving her agency. Most of her lessons apply to any government employee venturing out to the private sector.

Below is the first of her three-part series.

—-

At least a few times a month, people looking to make the jump ask about my transition, which has led me to consolidate my answers below. To be up front, some of what I write will be controversial and all of it is biased. Due to length, I’ve broken it up into a three-part series.


Is it really a big jump to the private sector? It wasn’t a big jump. At the Agency, 85% of my time was spent navigating bureaucracy and equities, arguing for resources and permission for operations, and dealing with the bottom rung of employees, all while making decisions with little data or data overload. Only 15% of my time was doing the more exciting operations. Though that 15% – along with the camaraderie of some of my colleagues – made the work deeply meaningful.

Industry is similar. Human nature is human nature, and I deal with many of the same challenges and pull many of the same levers of satisfaction. The difference is my decisions now aren’t life or death.

Another large difference is the greater level of autonomy I now have. Making decisions on the fly in operations is an extreme example of autonomy, of course, but there is always a back-end overhead. Depending on company culture, decision-making can be driven dramatically down with less overhead. As an example, I can make direct recommendations to Congress with no oversight, no internal reporting requirements, and with the trust of the CEO and Board.

Do you miss it? Yes. Nothing beats the rush of bumping a target who agrees to meet with you again or landing in a foreign country for the first time. I no longer know the stories behind the headlines, and I’m not the person making those stories happen. Aside from close friends, I am now treated as an “outsider” by former colleagues.

Fortunately, I still work with smart people solving hard problems every day. And there is still meaning in what I do. Raising tens of millions of dollars from investors to advance a technology faster than the Chinese Communist Party uses the same skillset. Learning how M&A deals are structured gives me the same thrill as first learning the mechanics of a surveillance detection route. It’s the excitement of being a beginner again, but one with deep and profound experiences, which blunts the downs and enhances the ups that you will face post-Agency.

Today, I get to move our national security mission in emerging technologies farther and faster in ways that I could not in government. And while there is some level of self-justification in these statements, there is nonlinearity in industry. You can move at exponential speed.

How do you transfer your old skills to your current role? Driving decisions, organizational change, and operations in a deep tech company presents many of the same challenges and opportunities as my time in government. Leading and managing people amid uncertainty, high degrees of change, and making decisions remain my day-to-day functions. My current role as a Chief of Staff is in many ways like a DCOS (deputy chief of station) or a traditional Chief of Staff in government. I work behind the scenes, and sometimes out front, to shape our company vision, strategy and then execute, measure, and refine. (Rather than giving away bags of cash in my old job, I now ask for money from investors.)

Relationship dynamics are the same, minus the burden of extreme secrecy. All the things that most of the outside world doesn’t understand as being critical to a handler-asset relationship are just as critical to relationships in industry. Judgment remains paramount.

In the Agency I dealt with a few difficult personalities focused on empire-building and metrics rather than running sound operations. You likely will still deal with this in industry, though there are far fewer layers and entrenched interests to deal with. Knowing how to navigate various stakeholders and interests, avoid landmines, and bring people together is an extremely useful skill in industry. If you’ve been a “doer” who knows how to communicate, work, and gain buy-in across an enterprise that is geographically dispersed, as well as with and against external third parties who are frenemies (or outright hostile), this will serve you well in industry. Talk about it when you’re seeking jobs and interviewing.

Did you make any resume missteps? Most often your resume is not what will get you a job, and submitting one to a recruiter or resume bank is not the right move. Your resume is almost certainly written in government-speak, and probably more terrible than you realize. It likely lists all the jobs you held (to the degree you can share), the dates, and maybe the general locations, but says nothing about what you actually accomplished or how it specifically relates to industry. You probably won’t even get beyond the AI filter.

Having a resume that says you served in country X and wrote reports that went to policymakers, and “the President,” might get you a curiosity interview, but won’t get you a job. Unless you can translate how your skills provide commercial value, you won’t get hired.

First, figure out which industry you want to work in, narrow it down, and work hard to get introductions at the senior levels of a handful of companies (a Board of Directors member, Advisory Board member, member of the C-suite (CEO, CTO, CFO, etc.), and/or an investor). You have to do a lot of networking to create your list and build your network. Find a way to meet and captivate them with a story of what you did and how your skills can transfer to industry and add value to their company.

An early learning point for me came as I was speaking with a prospective VC about a job. He flat-out told me he didn’t understand my value to the company. He asked point blank, “How much money did you net the U.S. Government over your career, what exactly did you do in order to get those results, and how would you bring me those same returns?”

You will get asked a question like this.

My suggestion is to say something along these lines: “It’s exponentially harder to be hired by the Agency than it is to get into Harvard, and not only was I hired based on an assessment of my judgment and the ability to operate in ambiguous situations, I then was trained to do just that, and then did it for years.

I was entrusted to create and carry out some of the most sensitive and most important missions that the U.S. Government conducts, often with little direction. Not only did I have to plan and do them, I had to do so in secret, with lives on the line, which is hard to put a price tag on.

You can give me your toughest problem, and I will figure out how to solve it in record time with buy-in from those whom you rarely get buy-in, and position you for multiple shots on goal for future opportunities because I will have your company and sector wired. I can do for you what I did for our country: evaluate opportunity, mitigate risk, and make quick and smart decisions that attack problems differently than a typical insider would. I’ll turn my salary into millions of dollars in returns or investments within two years – not singlehandedly – but in a cooperative way that leverages many parts of the company. We’ll row in unison and we’ll row in the right direction.”

How did you get your current job? I networked nonstop and ran a full targeting campaign for multiple companies to get to their CEOs. I didn’t have a resume when I was looking for jobs. I had to find senior people who had left the agency who would vouch for me.

For my current company, Infleqtion, I was introduced to a former senior Intelligence Community official who had previously served on a board with the CEO and who made an introduction. When we met I asked the CEO about his challenges and outlined how I might be able to help. Five months later, the CEO called and said he might have a job for me, and invited me to visit and speak with others in the company for their input. I received an offer shortly thereafter.

Meanwhile, three years before I left the Agency I had done a cold outreach on LinkedIn to the person I suspected was the hiring manager for a job advertisement for a company that I liked. The person told me they wanted someone with more business experience for the role, but then came calling three years later when another role opened that they thought would be a good fit. Ultimately, I met each layer up in that company including the CEO.

This all came in handy when negotiating salary, title, and function. From the many, many hours of networking hustle, I received two job offers in parallel, and I negotiated both to similar titles and compensation levels. Throughout the entire process, I forwarded them relevant articles and commentary on opportunities to demonstrate my value. Ultimately, I chose Infleqtion because of its mission, its people, and its reputation in US Government circles.

Action: A) If you’re an A-player, stay in government. B) If you’re an A-player and leave, do great things on the outside and return to government service at some point.

Coming up next:

•  Part II – what are the criteria for choosing your next role, the most common types of business roles that formers go into, and how to think about big vs small company risks and current markets.

•  Part III  – title, compensation (salary + equity + bonuses) and resources you can use.

Read the rest of Laura’s blogs at https://www.lauraethomas.com/

Profound Beliefs

This post previously appeared in EIX.

In the early stages of a startup your hypotheses about all the parts of your business model are your profound beliefs. Think of profound beliefs as “strong opinions loosely held.”

You can’t be an effective founder or in the C-suite of a startup if you don’t hold any.

Here’s how I learned why they were critical to successful customer development.


I was an aggressive, young, and very tactical VP of Marketing at Ardent, a supercomputer company, and I really didn’t have a clue about the relationship between profound beliefs, customer discovery, and strategy.

One day the CEO called me into his office and asked, “Steve I’ve been thinking about this as our strategy going forward. What do you think?” And he proceeded to lay out a fairly complex and innovative sales and marketing strategy for our next 18 months.  “Yeah, that sounds great,” I said. He nodded and then offered up, “Well what do you think of this other strategy?” I listened intently as he spun an equally complex alternative strategy. “Can you pull both of these off?” he asked looking right at me.  By the angelic look on his face I should have known that I was being set up. I replied naively, “Sure, I’ll get right on it.”

Ambushed
Decades later I still remember what happened next. All of a sudden the air temperature in the room dropped by about 40 degrees. Out of nowhere the CEO started screaming at me, “You stupid x?!x. These strategies are mutually exclusive. Executing both of them would put us out of business. You don’t have a clue about what the purpose of marketing is because all you are doing is giving engineering a list of feature requests and executing a series of tasks like they’re a big To Do list. Without understanding why you’re doing them, you’re dangerous as the VP of Marketing; in fact, you’re just a glorified head of marketing communications. You have no profound beliefs.”

I left in a daze, angry and confused. There was no doubt my boss was a jerk, but I didn’t understand the point. I was a great marketer. I was getting feedback from customers, and I’d pass on every list of what customers wanted to engineering and tell them that’s the features our customers needed. I could implement any marketing plan handed to me regardless of how complex. In fact I was implementing three different ones. Oh…hmm… perhaps I was missing something.

I was executing a lot of marketing “things” but why was I doing them? The CEO was right. I had approached my activities as simply a task-list to get through. With my tail between my legs I was left to ponder: What was the function of marketing in a startup? And more importantly, what was a profound belief and why was it important?

Hypotheses about Your Business Model = Your Profound Beliefs Loosely Held
Your hypotheses about all the parts of your business model are your profound beliefs. Think of them as strong opinions loosely held. You can’t be an effective founder or in the C-suite if you don’t have any.

The whole role of customer discovery and validation outside your building is to inform your profound beliefs. By inform I mean use the evidence you gather outside the building to validate, invalidate, or modify your beliefs/hypotheses. Which beliefs and hypotheses, specifically? Start with those around product/market fit – who are your customers and what features do they want? Who are the payers? Then march through the rest of the business model. What price will they pay? What role do regulators play? Etc. The best validation you can get is an order. (BTW, if you’re creating a new market, it’s even OK to ignore customer feedback, but you have to be able to articulate why.)

The reality of a startup is that on day one most of your beliefs/hypotheses are likely wrong. However, you will be informed by those experiments outside the building, and data from potential customers, partners, regulators, et al. will modify your vision over time.

It’s helpful to diagram the relationship between hypotheses/beliefs and customer discovery. (See the diagram.)

If you have no beliefs and haven’t gotten out of the building to gather evidence, then your role inside a new venture is neutral. You act as a tactical implementer and add no insight or value to product development.

If you’ve gotten out of the building to gather evidence but have no profound beliefs to guide your inquiries, then your role inside a new venture is negative. You’ll collect a laundry-list of customer feature requests and deliver them to product development, without any insight. This is essentially a denial of service attack on engineering’s time. (I was mostly operating in this box when I got chewed out by our CEO.)

The biggest drag on a startup is those who have strong beliefs but haven’t gotten out of the building to gather evidence. Meetings become opinion contests, and those with the loudest voices (or worse, “I’m the CEO and my opinion matters more than your facts”) dominate planning and strategy. (They may be right, but Twitter/X is an example where Elon is in the box on the bottom right of the diagram.)

The winning combination is strong beliefs that are validated or modified by evidence gathered outside the building. These are “strong opinions loosely held.”

Strategy is Not a To Do List, It Drives a To Do List
It took me a while, but I began to realize that the strategic part of my job was to recognize that (in today’s jargon) we were still searching for a scalable and repeatable business model. Therefore my job was to:

  • Articulate the founding team’s strong beliefs and hypotheses about our business model
  • Do an internal check-in to see if a) the founders were aligned and b) if I agreed with them
  • Get out of the building and test our strong beliefs and hypotheses about who were potential customers, what problems they had and what their needs were
  • Test product development’s/engineering’s beliefs about customer needs with customer feedback
  • When we found product/market fit, marketing’s job was to put together a strategy/plan for marketing and sales. That should be easy. If we did enough discovery customers would have told us what features were important to them, how we compare to competitors, how we should set prices, and how to best sell to them

Once I understood the strategy, the tactical marketing To Do list (website, branding, PR, tradeshows, white papers, data sheets) became clear. It allowed me to prioritize what I did and when I did it, and instantly understand what would be mutually exclusive.

Lessons Learned

  • Profound beliefs are your hypotheses about all the parts of your business model
    • No profound beliefs but lots of customer discovery ends up as a feature list collection which is detrimental to product development
    • Profound beliefs but no customer discovery ends up as opinion contests and those with the loudest voices dominate
  • The winning combination is strong beliefs that are validated or modified by evidence gathered outside the building. These are “strong opinions loosely held.”

Before there was Oppenheimer there was Vannevar Bush

I just saw the movie Oppenheimer.  A wonderful movie on multiple levels.

But the Atomic Bomb story that starts at Los Alamos with Oppenheimer and General Groves misses the fact that from mid-1940 to mid-1942 it was Vannevar Bush (and his number 2, James Conant, the president of Harvard) who ran the U.S. atomic bomb program and laid the groundwork that made the Manhattan Project possible.

Here’s the story.


During World War II, the combatants (Germany, Britain, the U.S., Japan, Italy, and the Soviet Union) made strategic decisions about what types of weapons to build (tanks, airplanes, ships, submarines, artillery, rockets), what the right mix was (aircraft carriers, fighter planes, bombers, light/medium/heavy tanks, etc.), and how many to build.

But only one country – the U.S. — succeeded in building nuclear reactors and nuclear weapons during the war, moving from atomic theory and lab experiments to actually deploying nuclear weapons in a remarkable 3 years.

Three reasons unique to the U.S. made this possible:

  1. Émigré and U.S. physicists, fearing that the Nazis would build an atomic bomb, became passionate advocates before the government got involved.
  2. A Presidential Science Advisor who created a civilian organization for building advanced weapons systems, funded and coordinated atomic research, then convinced the president to authorize an atomic bomb program and order the Army to build it.
  3. The commitment of U.S. industrial capacity and manpower to the atomic bomb program as the No. 1 national priority.

The Atom Splits
In December 1938 scientists in Nazi Germany reported a new discovery – that the uranium atom split (fissioned) when hit with neutrons. Other scientists calculated that splitting the uranium atom released an enormous amount of energy.

Fear and Einstein
Once it became clear that in theory a single bomb with enormous destructive potential was possible, it’s hard to overstate the existential dread, fear, and outright panic of U.S. and British émigré physicists – many of them Jewish refugees who had fled Germany and occupied Europe. In the 1920s and ’30s, Germany was the world center of advanced physics and the home of many first-class scientists. Having seen firsthand the terror of Nazi conquest, these physicists understood all too well what an atomic bomb in the hands of the Nazis would mean. They assumed that German scientists had the know-how and capacity to build one. This was so concerning that physicists convinced Albert Einstein in August 1939 to write to President Roosevelt pointing out the potential of an atomic weapon and the risk of the bomb in German hands.

Motivated by fear of a Nazi atomic bomb, for the next two years scientists in the U.S. lobbied, pushed and worked at a frantic speed to get the government engaged, believing they were in a race with Nazi Germany to build a bomb.

After Einstein’s letter, Roosevelt appointed an Advisory Committee on Uranium. In early 1940 the Committee recommended that the government fund limited research on Uranium isotope separation. It spent $6,000.

Vannevar Bush Takes Over – National Defense Research Committee (NDRC)
European émigré physicists (Einstein, Fermi, Szilard, and Teller) and Ernest Lawrence at Berkeley were frustrated at the pace the Advisory Committee on Uranium was moving. As theorists, they thought it was clear an atomic bomb could be built. They wanted the U.S. government to aggressively fund atomic research, so that the U.S. could build an atomic bomb before the Germans had one.

They weren’t alone in feeling frustrated about the U.S. approach to advanced weapons, not just atomic bombs.

In June 1940 Vannevar Bush, ex-MIT dean of engineering, and a group of the country’s top science and research administrators (Harvard President James Conant; Frank Jewett, Bell Labs President and head of the National Academy of Sciences; and Caltech Dean Richard Tolman) all felt that there was a huge disconnect. The U.S. military had little idea of what science could provide in the event of war, and scientists were wholly in the dark as to what the military needed. As a result, they believed the U.S. was woefully unprepared and ill-equipped for a war driven by technology.

This group engineered a massive end run around the existing Army and Navy Research and Development labs. Bush and others believed that advanced weapons could be created better and faster if they could be designed by civilian scientists and engineers in universities and companies.

The scientists drafted a one-page plan for a National Defense Research Committee (NDRC). The NDRC would look for new technologies that the military labs weren’t working on (radar, proximity fuses, and anti-submarine warfare). (At first, atomic weapons weren’t even on their list.)

In June 1940 Bush got Roosevelt’s approval for the NDRC. In a masterful bureaucratic sleight of hand the NDRC sat in the newly created Executive Office of the President (EOP), where it got its funding and reported directly to the president. This meant that the NDRC didn’t need legislation or a presidential executive order. More importantly, it could operate without congressional or military oversight.

Roosevelt’s decision gave the United States an 18-month head start for employing science in the war effort.

The NDRC was divided into five divisions and one committee, each run by a civilian director and each having a number of sections. (See the diagram below.)

Bush became chairman of the NDRC and the first U.S. Presidential Science Advisor, systematically applying science to develop advanced weapons. The U.S., alone among the Axis powers and Allied nations, now had a science advisor who reported directly to the president and had the charter and budget to fund advanced weapons systems research – outside the confines of the Army or Navy.

The NDRC was run by science administrators who had managed university researchers as well as complex research and applied engineering projects before. They took input from theorists, experimental physicists, and industrial contractors, and were able to weigh the advice they were receiving. They understood the risks, scale, and resources needed to turn blackboard theory into deployed weapons. Equally important, they weren’t afraid to make multiple bets on a promising technology, nor were they afraid to kill projects that seemed like dead ends for the war effort.

200+ contracts
Prior to mid-1940, research in U.S. universities was funded by private foundations or companies. There was no government funding. The NDRC changed that. With a budget of $10,000,000 to fund research proposed by the five section chairmen, the NDRC funded 200+ contracts for research in radar, physics, optics, chemical engineering, and atomic fission.

For the first time ever, U.S. university researchers were receiving funding from the U.S. government. (It would never stop.)

The Uranium Committee
In addition to the five NDRC divisions working on conventional weapons, the NDRC took over the moribund standalone Uranium Committee and made it a scientific advisory board reporting directly to Bush. The goal was to understand whether the theory of an atomic weapon could be turned into a practical weapon. Now the NDRC could directly fund research scientists to investigate ways to separate U-235 to make a bomb.

What Didn’t Work at the NDRC?
After a year, it was clear to Bush that while the NDRC was funding advanced research, the military wasn’t integrating those inventions into weapons. The NDRC had no authority to build and acquire weapons. Bush decided what he needed was a way to bypass traditional Army and Navy procurement processes and get those advanced weapons built.

Read the sidebars for background.

The Office of Scientific Research and Development Stands Up
In May 1941 Bush went back to President Roosevelt, this time with a more audacious request: turn the NDRC into an organization that not only funded research but built prototypes of new advanced weapons and had the budget and authority to write contracts with industry to build those weapons at scale. In June 1941 Roosevelt agreed and signed the Executive Order creating the Office of Scientific Research and Development (OSRD). (It’s worth reading the Executive Order here to see the extraordinary authority he gave OSRD.)

OSRD expanded the National Defense Research Committee’s (NDRC) original five divisions into 19 weapons divisions, five research committees, and a medical portfolio. Each division managed a broad portfolio of projects from research to production and deployment. Its organization chart is shown below.

These divisions spearheaded the development of an impressive array of advanced weapons including radar, rockets, sonar, the proximity fuse, Napalm, the Bazooka and new drugs such as penicillin and cures for malaria.

The OSRD was a radical experiment. Instead of the military controlling weapons development Bush was now running an organization where civilian scientists designed and built advanced weapons systems. Nearly 10,000 scientists and engineers received draft deferments to work in these labs.

(Prior to World War 2, science in U.S. universities was primarily funded by companies interested in specific research projects. But funding for basic research came from two non-profits: the Rockefeller Foundation and the Carnegie Institution. In his role as President of the Carnegie Institution, Bush got to know (and fund!) every top university scientist in the U.S.)

As a harbinger of much bigger things, the NDRC Uranium Committee was enlarged and renamed the S-1 Section on Uranium.

Throughout the next year the pace of atomic research picked up. And Bush’s involvement in launching the U.S. nuclear weapons program would grow larger.

 By the middle of 1941 Bush was beginning to believe that building an atomic bomb was possible. But he felt he did not have enough evidence to suggest to the president that the country commit to the massive engineering effort to build the bomb.

Then the MAUD report from the British arrived.

The British Nuclear Weapons Program codenamed “Tube Alloys” and the MAUD Report

Meanwhile in the UK, British nuclear physicists had not only concluded that building an atomic bomb was feasible, they had also calculated the size of the industrial effort needed. In March 1940 scientists had told UK Prime Minister Winston Churchill that nuclear weapons could be built.

In June 1940 the UK formed the MAUD Committee to study the possibility of developing a nuclear weapon. A year later they had their answer: the July 1941 MAUD Committee report, “Use of Uranium for a Bomb,” said that it was possible to build a bomb from uranium, using gaseous diffusion on a massive scale to produce uranium-235. It kick-started the UK’s own nuclear weapons program, called Tube Alloys. (Read the MAUD report here.)

They delivered their report to Vannevar Bush in July 1941. And it changed everything.

Bush is Convinced by the MAUD Report
The MAUD Report finally pushed Bush over the edge. The British report showed how it was possible to build an atomic bomb. The fact that the British were independently saying what passionate advocates like Lawrence, Fermi, et al were saying convinced Bush that an atomic bomb program was worth investing in at the scale needed.

For a short period of time in 1941 the UK was ahead of the U.S. in thinking about how to weaponize uranium, but British officials dithered on approaching the U.S. for a full nuclear partnership. By mid-1942, when the British realized their industrial capacity was stretched too thin and they couldn’t build the uranium separation plants and the bomb alone during the war, the Manhattan Project was scaling up and the U.S. had no need for the UK.

The UK would play a minor role in the Manhattan project.

Bush Tells Roosevelt – We Can Build an Atomic Bomb
In October 1941, Bush told the President about the British MAUD report conclusions: the bomb’s uranium core might weigh twenty-five pounds, its explosive power might equal eighteen hundred tons of TNT, but to separate the U-235 they would need to build a massive industrial facility. The President asked Bush to work with the Army Corps of Engineers to figure out what type of plant to build, how to build it, and how much it would cost.

A month later, in November 1941 the U.S. National Academy of Sciences confirmed to Bush that the British MAUD report conclusions were correct.

Bush now had all the pieces lined up to support an all-out effort to develop an atomic bomb.

December 1941 – Let’s Build an Atomic Bomb
In December 1941, the day before the Japanese attack on Pearl Harbor, the atomic bomb program was placed under Vannevar Bush. He renamed the Uranium program as the S-1 Committee of OSRD.

In addition to overseeing the 19 Divisions of OSRD, Bush’s new responsibility was to coordinate all the moving parts of the atomic bomb program – the research, the lab experiments, and now the beginning of construction contracts.

With the President’s support, Bush reorganized the program to take it from research to a weapons program. The goal now was to find the best ways to produce uranium-235 and plutonium in large quantities. He appointed Harold Urey at Columbia to lead the gaseous diffusion and centrifuge methods and heavy-water studies. Ernest Lawrence at Berkeley took electromagnetic and plutonium responsibilities, and Arthur Compton at Chicago ran chain reaction and weapons theory programs. This team proposed to begin building pilot plants for all five methods of separating U-235 before they were proven. Bush and Conant agreed and sent the plan to the President, Vice President, and Secretary of War, suggesting the Army Corps of Engineers build these plants.

With the U.S. now at war with Germany and Japan, the race to build the bomb was on.

In January 1942, Compton made Oppenheimer responsible for fast neutron research at Berkeley. This very small part of the atomic bomb program is the first time Oppenheimer was formally engaged in atomic bomb work.

Enter the Army
The Army began attending OSRD S-1 (the Atomic Bomb group) meetings in March 1942. Bush told the President that by the summer of 1942 the Army should be authorized to build full-scale plants.

Build the U-235 Separation and Plutonium Plants
By May 1942 it was still unclear which U-235 separation method would work and what was the right way to build a nuclear reactor to make Plutonium, so the S-1 committee recommended – build all of them. Build centrifuge, electromagnetic separation, and gaseous diffusion plants as fast as possible; build a heavy water plant for the nuclear reactors as an alternative to graphite; build reactors to produce plutonium; and start planning for large-scale production and select the site(s).  The S-1 Committee also recommended the Army be in charge of building the plants.

Meanwhile that same month, Oppenheimer was made the “Coordinator of Rapid Rupture.” He headed up a group of theorists working with experimentalists to calculate how many pounds of U-235 and Plutonium were needed for a bomb.

The Manhattan Engineering District – The Atomic Program Moves to the Army
In June 1942, the president approved Bush’s plan to hand building the bomb over to the Army.  The Manhattan Engineering District became the new name for the U.S. atomic bomb program. General Groves was appointed its head in September 1942.

To everyone’s surprise, Groves selected Oppenheimer to administer the program. It was a surprise because up until then Oppenheimer had been a theoretical physicist, not an experimentalist, nor had he ever run or managed any programs.

Groves and Oppenheimer decided that in addition to the massive production facilities – U-235 in Oak Ridge, TN, and plutonium in Hanford, WA – they would need a central laboratory to design the bomb itself. This would become Los Alamos. And Oppenheimer would head that lab, bringing together a diverse set of theorists, experimental physicists, explosives experts, chemists, and metallurgists.

Bush, Conant and Groves at the Plutonium production site at Hanford – July 1945

At its peak in mid-1944 130,000 people were working on the Manhattan Project; 5,000 of them worked at Los Alamos.

Vannevar Bush would be present at the test of the Plutonium weapon at the Trinity test site in July 1945.

The OSRD would be the organization that made the U.S. the leader in 20th-century research. At the end of World War II, Bush laid out his vision for future U.S. support of research in an article called “Science, The Endless Frontier.” OSRD was disbanded in 1947, but after a long debate it was resurrected in pieces. Out of it came the National Science Foundation, the National Institutes of Health, the Atomic Energy Commission, and ultimately NASA and DARPA.

50 years before it happened, Bush described what would become the internet in a 1945 article called “As We May Think.”

Summary

  • By the time Oppenheimer and Groves took over the Atomic Bomb program, Vannevar Bush had been running it for two years
  • The U.S. atomic bomb program was the sum of multiple small decisions guided by OSRD and a Presidential science advisor – Vannevar Bush
  • Bush’s organizations kick-started the program. The NDRC invested (in 2023 dollars) $10M in nuclear research, OSRD put in another $250M for nuclear experiments
  • The Manhattan project would ultimately cost ~$40 billion to build the two bombs.
  • As the country was in a crisis – decisions were made in days/weeks by small groups with the authority to move with speed and urgency.
  • Large-scale federal funding for science research in U.S. universities started with the Office of Scientific Research and Development (OSRD) – more to come in subsequent posts

Read all the Secret History posts here


Lean Meets Wicked Problems

This post previously appeared in Poets & Quants.

I just spent a month and a half at Imperial College London co-teaching a “Wicked” Entrepreneurship class. In this case Wicked doesn’t mean morally evil, but refers to really complex problems, ones with multiple moving parts, where the solution isn’t obvious. (Understanding and solving homelessness, disinformation, climate change mitigation or an insurgency are examples of wicked problems. Companies also face Wicked problems. In contrast, designing AI-driven enterprise software or building dating apps are comparatively simple problems.)


I’ve known Professor Cristobal Garcia since 2010, when he hosted my first visit to Catholic University in Santiago, Chile, and to southern Patagonia. Now at Imperial College Business School and Co-Founder of the Wicked Acceleration Labs, Cristobal and I wondered if we could combine the tenets of Lean (get out of the building, build MVPs, run experiments, move with speed and urgency) with the expanded toolset developed by researchers who work on Wicked problems and Systems Thinking.

Our goal was to see if we could get students to stop admiring problems and work rapidly on solving them. As Wicked and Lean seem to be mutually exclusive, this was a pretty audacious undertaking.

This five-week class was going to be our MVP.

Here’s what happened.

Finding The Problems
Professor Garcia scoured the world to find eight Wicked/complex problems for students to work on. He presented to organizations in the Netherlands, Chile, Spain, the UK (Ministry of Defense and the BBC), and aerospace companies. The end result was a truly ambitious, unique, and international set of curated Wicked problems.

  • Increasing security and prosperity amid the Mapuche conflict in the Araucanía region of Chile
  • Enabling and accelerating a Green Hydrogen economy
  • Turning the Basque Country in Spain into an AI hub
  • Solving Disinformation/Information Pollution for the BBC
  • Creating Blue Carbon projects for the UK Ministry of Defense
  • Improving patient outcomes for Ukrainian battlefield injuries
  • Imagining the future of a low-earth-orbit space economy
  • Creating a modular architecture for future UK defense ships

Recruiting the Students
With the problems in hand, we set about recruiting students from both Imperial College’s business school and the Royal College of Art’s design and engineering programs.

We held an info session explaining the problems and the unique parts of the class. We were going to share with them a “Swiss Army Knife” of traditional tools to understand Wicked/Complex problems, but they were not going to research these problems in the library. Instead, using the elements of Lean methodology, they were going to get out of the building and observe the problems first-hand. And instead of passively observing them, they were going to build and test MVPs.  All in six weeks.

50 students signed up to work on the eight problems with different degrees of “wickedness”.

Imperial Wicked Problems and Systems Thinking – 2023 Class

The Class
The pedagogy of the class (our teaching methods and learning activities) was similar to all the Lean/I-Corps and Hacking for Defense classes we’ve previously taught. This meant the class was team-based, Lean-driven (hypothesis testing/business model/customer development/agile engineering), and experiential – where the students, rather than being presented with all of the essential information, must discover that information rapidly for themselves.

The teams were going to get out of the building and talk to 10 stakeholders a week. Then each week every team would present: 1) here’s what we thought, 2) here’s what we did, 3) here’s what we learned, 4) here’s what we’re going to do this week.

More Tools
The key difference between this class and previous Lean/I-Corps and Hacking for Defense classes was that Wicked problems required more than just a business model or mission model to grasp the problem and map the solution. Here, to get a handle on the complexity of their problem, the students needed a suite of tools – Stakeholder Maps, Systems Maps, Assumptions Mapping, Experimentation Menus, the Unintended Consequences Map, and finally Dr. Garcia’s derivative of Alexander Osterwalder’s Business Model Canvas – the Wicked Canvas – which added to the traditional canvas the concept of unintended consequences and the “sub-problems” seen from different stakeholders’ perspectives.

During the class the teaching team offered explanations of each tool, but the teams got a firmer grasp on Wicked tools from a guest lecture by Professor Terry Irwin, Director of the Transition Design Institute at Carnegie Mellon (see her presentation here.) Throughout the class, teams had the flexibility to select the tools they felt were appropriate to rapidly gain a holistic understanding of their problem and to develop a minimum viable product to address and experiment with each of the wicked problems.

Class Flow
Week 1 

  • What is a simple idea? What are big ideas and Impact Hypotheses? 
    • Characteristics of each. Rewards, CEO, team, complexity, end point, etc. 
  • What is unique about Wicked Problems?
    • Beyond TAM and SAM (“back of the napkin”) for Wicked Problems
  • You need Big Ideas to tackle Wicked Problems: but who does it?
    •  Startups vs. Large Companies vs. Governments
    • Innovation at Speed for Horizon 1, 2 and 3 (Managing the Portfolio across Horizons)
  • What is Systems Thinking?
  • How to map stakeholders and systems’ dynamics?
  • Customer & Stakeholder Discovery: getting outside the building, city and country: why and how? 

Mapping the Problem(s), Stakeholders and Systems –  Wicked Tools

Week 2

  • Teams present for 6 minutes and receive 4 minutes of feedback
  • The Wicked Swiss Army Knife for the week: Mapping Assumptions Matrix, unintended consequences and how to run and design experiments
  • Prof Erkko Autio (ICBS and Wicked Labs) on AI Ecosystems and Prof Peter Palensky (TU Delft) on Smart Grids, Decarbonization and Green Hydrogen
  • Lecture on Minimum Viable Products (MVPs) and Experiments
  • Homework: getting outside the building & the country to run experiments

Assumption Mapping and Experimentation Type –  Wicked Tools

Week 3

  • Teams present for 6 minutes and receive 4 minutes of feedback
  • The Wicked Swiss Army Knife for the week: from problem to solution via “How Might We…” Builder and further initial solution experimentation
  • On Canvases: What, Why and How 
  • The Wicked Canvas 
  • Next Steps and Homework: continue running experiments with MVPs and start validating your business/mission/wicked canvas

The Wicked Canvas –  Wicked Tools

Experimentation Design and How We Might… –  Wicked Tools

Week 4

  • Teams present for 6 minutes and receive 5 minutes of feedback
  • Wicked Business Models – validating all building blocks
  • The Geography of Innovation – the milieu, creative cities & prosperous regions 
  • How World War II and the UK Started Silicon Valley
  • The Wicked Swiss Army Knife for the week – maps for acupuncture in the territory
  • Storytelling & Pitching 
  • Homework: Validated MVP & Lessons learned

Acupuncture Map for Regional System Intervention  – Wicked Tools


Week 5

  • Teams presented their Final Lessons Learned journey – Validated MVP, Insights & Hindsight (see the presentations at the end of the post.)
    • What did we understand about the problem on day 1?
    • What do we now understand?
    • How did we get here?
    • What solutions would we propose now?
    • What did we learn?
    • Reflections on the Wicked Tools

Results
To be honest, I wasn’t sure what to expect. We pushed the students way past what they had done in other classes. In spite of what we said in the info session and syllabus, many students were in shock when they realized that they couldn’t pass the class by just showing up, and heard in no uncertain terms that doing no stakeholder/customer interviews in week 1 was unacceptable.

Yet everyone got the message pretty quickly. The team working on the Mapuche conflict in the Araucania region of Chile flew to Chile from London, interviewed multiple stakeholders and were back in time for the next week’s class. The team working to turn the Basque Country in Spain into an AI hub did the same – they flew to Bilbao and interviewed several stakeholders. The team working on Green Hydrogen got connected to the Rotterdam ecosystem and key stakeholders in the Port, energy incumbents, VCs and tech universities. The team working on Ukraine did not fly there for obvious reasons. The rest of the teams spread out across the UK – all of them furiously mapping stakeholders, assumptions, systems, etc., while proposing minimum viable solutions. By the end of the class it was a whirlwind of activity as students not only presented their progress but saw that of their peers. No one wanted to be left behind. They all moved with speed and alacrity.

Lessons Learned

  • Our conclusion? While this class is not a substitute for a years-long deep analysis of Wicked/complex problems, it gave students:
    • a practical hands-on introduction to tools to map, sense, understand and potentially solve Wicked Problems
    • the confidence and tools to stop admiring problems and work on solving them
  • I think we’ll teach it again.

Team final presentations

The teams’ final lessons learned presentations were pretty extraordinary, matched only by their post-class comments. Take a look below.

Team Wicked Araucania

Click here if you can’t see the Araucania presentation.

Team Accelerate Basque

Click here if you can’t see the Accelerate Basque presentation.

Team Green Hydrogen

Click here if you can’t see the Green Hydrogen presentation.

Team Into The Blue

Click here if you can’t see the Team Blue presentation.

Team Information Pollution

Click here if you can’t see the Team Information Pollution presentation.

Team Ukraine

Click here if you can’t see the Team Ukraine presentation.

Team Wicked Space

Click here if you can’t see the Team Wicked Space presentation.

Team Future Proof the Navy

Click here if you can’t see the Future Proof the Navy presentation.



Reorganizing the DoD to Deter China and Win in the Ukraine – A Road Map for Congress

This article previously appeared in Defense News. It was co-written with Joe Felter and Pete Newell.

Today, the U.S. is supporting a proxy war with Russia while simultaneously attempting to deter a Chinese cross-strait invasion of Taiwan. Both are wake-up calls that victory and deterrence in modern war will be determined by a state’s ability to employ traditional weapons systems while simultaneously acquiring, deploying, and integrating commercial technologies (drones, satellites, targeting software, et al.) into operations at every level.

Ukraine’s military is not burdened with the DoD’s 65-year-old acquisition process and 20th-century operational concepts. It is learning and adapting on the fly. China has made the leap to a “whole of nation” approach. This has allowed the People’s Liberation Army (PLA) to integrate private capital and commercial technology and use them as a force multiplier to dominate the South China Sea and prepare for a cross-strait invasion of Taiwan.

The DoD has not done either of these. It is currently organized and oriented to execute traditional weapons systems and operational concepts with its traditional vendors and research centers but is woefully unprepared to integrate commercial technologies and private capital at scale.

Copying SecDef Ash Carter’s 2015 strategy, China has been engaged in Civil/Military Fusion, employing a whole-of-government coordinated effort to harness these disruptive commercial technologies for its national security needs. To fuel the development of technologies critical for defense, China has tapped into $900 billion of private capital in Civil/Military Guidance (Investment) Funds and has taken state-owned enterprises public to fund its new shipyards, aircraft, and avionics. Worse, China will learn from and apply the lessons of Russia’s failures in Ukraine at an ever-increasing pace.

But unlike America’s arch strategic rival, the US to date has been unwilling and unable to adapt and adopt new types of systems and operational concepts at the speed of our adversaries. These include attritable systems, autonomous systems, swarms, and other emerging defense platforms that threaten legacy systems, incumbent vendors, organizations, and cultures. (Until today, the U.S. effort was stillborn, with half-hearted support of its own Defense Innovation Unit and a history of lost capabilities like those that were inherent in the US Army’s Rapid Equipping Force.)

Viewing the DoD budget as a zero-sum game has turned the major defense primes and K Street lobbyists into saboteurs of any DoD organizational innovation that threatens their business models. Using private capital could be a force multiplier, adding hundreds of billions of dollars outside the DoD budget. Today, private capital is disincentivized from participating in national security, and incentives are aligned to ensure the U.S. military is organized and configured to fight and win the wars of the last century. The U.S. is on a collision course to experience catastrophic failure in a future conflict because of it. Only Congress can alter this equation.

For the U.S. to deter and prevail against China the DoD must create both a strategy and a redesigned organization to embrace those untapped external resources – private capital and commercial innovation. Currently the DoD lacks a coherent plan and an organization with the budget and authority to do so.

A reorganized and refocused DoD could acquire traditional weapons systems while simultaneously rapidly acquiring, deploying, and integrating commercial technologies. It would create a national industrial policy that incentivizes the development of 21st-century shipyards, drone and satellite factories and a new industrial base along the lines of the CHIPS and Innovation and Competition acts.

Congress must act to identify and implement changes within the DoD needed to optimize its organization and structure. These include:

  1. Create a new defense ecosystem that uses the external commercial innovation ecosystem and private capital as a force multiplier. Leverage the expertise of prime contractors as integrators of advanced technology and complex systems, refocus Federally Funded Research and Development Centers (FFRDCs) on areas not covered by commercial tech (kinetics, energetics, nuclear and hypersonics).
  2. Reorganize DoD Research and Engineering. Allocate its budget and resources equally between traditional sources of innovation and new commercial sources of innovation and capital. Split the OSD R&E organization in half. Keep the current organization focused on the status quo. Create a peer organization – the Under Secretary of Defense for Commercial Innovation and Private Capital.
  3. Scale up the new Office of Strategic Capital (OSC) and the Defense Innovation Unit (DIU) to be the lead agencies in this new organization. Give them the budget and authority to do so and provide the services the means to do the same.
  4. Reorganize DoD Acquisition and Sustainment. Allocate its budget and resources equally between traditional sources of production and the creation of new 21st-century arsenals – new shipyards, drone manufacturers, etc. – that can make thousands of low-cost, attritable systems.
  5. Coordinate with Allies. Expand the National Security Innovation Base (NSIB) to an Allied Security Innovation Base. Source commercial technology from allies.

Why Is It Up To Congress?

National power is ephemeral. Nations decline when they lose allies, economic power, or interest in global affairs; experience internal/civil conflicts; or miss disruptive technology transitions and new operational concepts.

The case can be made that all of these have happened or are happening to the U.S.

There is historical precedent for Congressional action to ensure the DoD is organized to fight and win our wars. The 1986 Goldwater-Nichols Act laid the foundation for conducting coordinated and effective joint operations by reorganizing the roles of the military services and the Joint Chiefs, and by creating the Joint Staff and the combatant commands. Congress must take Ukraine and China’s dominance in the South China Sea as a call to action and immediately establish a commission to determine what reforms and changes are needed to ensure the U.S. can fight and win our future wars.

While parts of the DoD understand we’re in a crisis to deter, or if that fails, win a war in the South China Sea, the DoD as a whole shows little urgency and misses a crucial point: China will not defer solving the Taiwan issue on our schedule. Russia will not defer its future plans for aggression to meet our dates.  We need to act now.

We fail to do so at our peril and the peril of all those who depend on U.S. security to survive.

Playing With Fire – ChatGPT

The world is very different now. For man holds in his mortal hands the power to abolish all forms of human poverty and all forms of human life.

John F. Kennedy

Humans have mastered lots of things that have transformed our lives, created our civilizations, and might ultimately kill us all. This year we’ve invented one more.


Artificial Intelligence has been the technology right around the corner for at least 50 years. Last year a set of specific AI apps caught everyone’s attention as AI finally crossed from the era of niche applications to the delivery of transformative and useful tools – Dall-E for creating images from text prompts, Github Copilot as a pair programming assistant, AlphaFold to calculate the shape of proteins, and ChatGPT 3.5 as an intelligent chatbot. These applications were seen as the beginning of what most assumed would be domain-specific tools. Most people (including me) believed that the next versions of these and other AI applications and tools would be incremental improvements.

We were very, very wrong.

This year with the introduction of ChatGPT-4 we may have seen the invention of something with the equivalent impact on society of explosives, mass communication, computers, recombinant DNA/CRISPR and nuclear weapons – all rolled into one application. If you haven’t played with ChatGPT-4, stop and spend a few minutes to do so here. Seriously.

At first blush ChatGPT is an extremely smart conversationalist (and homework writer and test taker). However, this is the first time ever that a software program has become human-competitive at multiple general tasks. (Look at the links and realize there’s no going back.) This level of performance was completely unexpected – even by its creators.

In addition to its outstanding performance on what it was designed to do, what has surprised researchers about ChatGPT is its emergent behaviors. That’s a fancy term that means “we didn’t build it to do that and have no idea how it knows how to do that.” These are behaviors that weren’t present in the small AI models that came before but are now appearing in large models like GPT-4. (Researchers believe this tipping point is a result of the complex interactions between the neural network architecture and the massive amounts of training data it has been exposed to – essentially everything that was on the Internet as of September 2021.)

(Another troubling potential of ChatGPT is its ability to manipulate people into beliefs that aren’t true. While ChatGPT “sounds really smart,” at times it simply makes things up, and it can convince you of something even when the facts aren’t correct. We’ve seen this effect in social media, when it was people who were manipulating beliefs. We can’t predict where an AI with emergent behaviors may decide to take these conversations.)

But that’s not all.

Opening Pandora’s Box
Until now ChatGPT was confined to a chat box that a user interacted with. But OpenAI (the company that developed ChatGPT) is letting ChatGPT reach out and interact with other applications through an API (an Application Programming Interface.)  On the business side that turns the product from an incredibly powerful application into an even more incredibly powerful platform that other software developers can plug into and build upon.

By exposing ChatGPT to a wider range of input and feedback through an API, developers and users are almost guaranteed to uncover new capabilities or applications for the model that were not initially anticipated. (The notion of an app being able to request more data and write code itself to do that is a bit sobering. This will almost certainly lead to even more new unexpected and emergent behaviors.) Some of these applications will create new industries and new jobs. Some will obsolete existing industries and jobs. And much like the invention of fire, explosives, mass communication, computing, recombinant DNA/CRISPR and nuclear weapons, the actual consequences are unknown.
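To make the application-to-platform shift concrete, here is a minimal sketch of what “plugging into” a ChatGPT-style API looks like from a developer’s side. The endpoint, model name, and payload shape follow the 2023-era chat-completions convention; treat the specifics (the `gpt-4` identifier, the role names, the helper function) as illustrative assumptions rather than OpenAI’s official client.

```python
# A hypothetical sketch (not OpenAI's official SDK) of how an application
# layers on top of a ChatGPT-style API: it assembles a JSON request and
# would POST it to the vendor's endpoint. No network call is made here.
import json

def build_chat_request(user_prompt, system_role="You are a helpful assistant."):
    """Assemble the JSON body an app would send to a chat-completions API."""
    return {
        "model": "gpt-4",  # assumed model identifier
        "messages": [
            {"role": "system", "content": system_role},  # sets the assistant's persona
            {"role": "user", "content": user_prompt},    # the end user's request
        ],
    }

# An application built on the platform just constructs requests like this
# and interprets the responses -- the model itself stays behind the API.
payload = build_chat_request("Summarize this contract in plain English.")
print(json.dumps(payload, indent=2))
```

The point of the sketch is that the entire “platform” surface is a small, structured request format: thousands of developers can build very different products while the model remains a shared service behind the API.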

Should you care? Should you worry?
First, you should definitely care.

Over the last 50 years I’ve been lucky enough to have been present at the creation of the first microprocessors, the first personal computers, and the first enterprise web applications. I’ve lived through the revolutions in telecom, life sciences, social media, etc., and watched as new industries, markets and customers were created literally overnight. With ChatGPT I might be seeing one more.

One of the problems with disruptive technology is that disruption doesn’t come with a memo. History is replete with journalists writing about it and not recognizing it (e.g. the NY Times putting the invention of the transistor on page 46) or others not understanding what they were seeing (e.g. Xerox executives ignoring the invention of the modern personal computer, with a graphical user interface and networking, in their own Palo Alto Research Center). Most people have stared into the face of massive disruption and failed to recognize it because to them, it looked like a toy.

Others look at the same technology and recognize at that instant the world will no longer be the same (e.g. Steve Jobs at Xerox). It might be a toy today, but they grasp what inevitably will happen when that technology scales, gets further refined and has tens of thousands of creative people building applications on top of it – they realize right then that the world has changed.

It’s likely we are seeing this here. Some will get ChatGPT’s importance instantly. Others will not.

Perhaps We Should Take A Deep Breath And Think About This?
A few people are concerned about the consequences of ChatGPT and other AGI-like applications and believe we are about to cross the Rubicon – a point of no return. They’ve suggested a 6-month moratorium on training AI systems more powerful than ChatGPT-4. Others find that idea laughable.

There is a long history of scientists concerned about what they’ve unleashed. In the U.S., scientists who worked on the development of the atomic bomb proposed civilian control of nuclear weapons. After WWII, in 1946, the U.S. government seriously considered international control over the development of nuclear weapons. And until recently most nations agreed to a treaty on the nonproliferation of nuclear weapons.

In 1974, molecular biologists were alarmed when they realized that newly discovered genetic editing tools (recombinant DNA technology) could put tumor-causing genes inside of E. Coli bacteria. There was concern that without any recognition of biohazards and without agreed-upon best practices for biosafety, there was a real danger of accidentally creating and unleashing something with dire consequences. They asked for a voluntary moratorium on recombinant DNA experiments until they could agree on best practices in labs. In 1975, the U.S. National Academy of Science sponsored what is known as the Asilomar Conference. Here biologists came up with guidelines for lab safety containment levels depending on the type of experiments, as well as a list of prohibited experiments (cloning things that could be harmful to humans, plants and animals).

Until recently these rules have kept most biological lab accidents under control.

Nuclear weapons and genetic engineering both had advocates for unlimited, unfettered experimentation – “let the science go where it will.” Yet even these minimal controls have kept the world safe for 75 years from potential catastrophes.

Goldman Sachs economists predict that 300 million jobs could be affected by the latest wave of AI. Other economists are just realizing the ripple effect that this technology will have. Simultaneously, new startups are forming, and venture capital is already pouring money into the field at an astounding rate that will only accelerate the impact of this generation of AI. Intellectual property lawyers are already arguing over who owns the data these AI models are built on. Governments and military organizations are coming to grips with the impact this technology will have across the Diplomatic, Information, Military and Economic spheres.

Now that the genie is out of the bottle, it’s not unreasonable to ask that AI researchers take 6 months and follow the model that other thoughtful and concerned scientists did in the past. (Stanford took down its version of ChatGPT over safety concerns.) Guidelines for use of this tech should be drawn up, perhaps paralleling the ones for genetic editing experiments – with Risk Assessments for the type of experiments and Biosafety Containment Levels that match the risk.

Unlike the moratoriums on atomic weapons and genetic engineering, which were driven by the concerns of research scientists without a profit motive, the continued expansion and funding of generative AI is driven by for-profit companies and venture capital.

Welcome to our brave new world.

Lessons Learned

  • Pay attention and hang on
  • We’re in for a bumpy ride
  • We need an Asilomar Conference for AI
  • For-profit companies and VCs are interested in accelerating the pace