Join Jerry Engel, Pete Newell, and Steve Weinstein for the sixth edition of the Lean Innovation Educators Summit on December 14, 1-4 pm Eastern Time, 10 am-1 pm Pacific Time. Register here.
—
This virtual gathering will bring together entrepreneurship educators from around the world who are putting Lean Innovation to work in their classrooms, accelerators, venture studios, and student-driven ventures.
The summit topic is “Education and Innovation in the Age of Chaos and Disruption.”
Our students will be facing the challenges of a world that’s rapidly changing, chaotic and uncertain. A world undergoing climate change, supply chain disruptions, political instability and continual technology innovation and disruption. It’s incumbent on us as educators to provide the next generation of innovators with the tools and mindset to meet these challenges.
Among the questions we’ll address in this short summit:
How do we as entrepreneurship and innovation educators best prepare the next generation?
What role should our institutions play in helping us do this?
What are the other systems and partnerships that we need to take advantage of?
We will have concurrent breakout sessions so participants have the opportunity to choose their own path to explore. We’ll then pivot to hear from colleagues across three broad categories of innovation:
Curriculum – We’ll discuss how best to equip educators with the tools they need to cultivate and guide student teams around solving mission-driven problems.
Ecosystems – We’ll explore partnerships that foster positive student engagement and outcomes, and how to support diversity of thought and background.
Trends – The rate of technological disruption shows no sign of slowing down. Climate change was a hypothesis for our generation but will be a fact on the ground for our students. The struggle between great powers and a fluid global landscape will accelerate. All of these will shape the future curricula our students need and educators must deliver.
Alexander Osterwalder, creator of the business model canvas and co-founder of Strategyzer, will join the discussion about the intersection of education, innovation and entrepreneurship.
During the breakout sessions, you will have the opportunity to contribute to the conversation via Chat, Q&A, and an online community bulletin board. We will close out the Summit with Alex Osterwalder’s fireside chat moderated by Dr. Jerry Engel.
How to register
When you register, you will receive a link to an online collaboration space where you can submit questions, challenges and feedback. This feedback will inform the content of the presentations, post-event white papers, and the curriculum delivered to our educator community.
This session is free but limited to innovation educators. Register here and learn more on our website. We look forward to gathering as a community of educators to shape the future of Lean Innovation Education.
Here’s their introduction to the key concepts inside the playbook.
Over 75% of executives report that innovation is a top three priority at their companies. However, only 20% of executives indicate that their companies are ready to innovate at scale. This is the challenge for contemporary organizations: How to develop a world-class ecosystem that can drive repeatable innovation at scale.
The playbook describes the three pillars of corporate innovation: Innovation Portfolios, Innovation Programs and a Culture of Innovation. Under each pillar, the playbook describes three questions that leaders and teams can ask to evaluate whether their company has the right innovation ecosystem in place.
Innovation Portfolio: what is your company’s portfolio of innovation projects?
Are your company’s innovation efforts exploring or exploiting business models?
Does your company have a balanced portfolio of projects that cover efficiency, sustaining and transformative innovation?
What is the health of your innovation funnel or pipeline?
Explore: Search for new value propositions and business models by designing and testing new business ideas rather than executing on existing ones.
Exploit: Manage existing business models by scaling emerging businesses, renovating declining ones and protecting the successful ones.
Innovation Programs: how are your company’s innovation programs structured and managed?
Do your leaders get excited about the wrong innovation programs?
What results are your innovation programs producing?
Are your company’s innovation programs interconnected in a strategic way?
To close the innovation capability gap, companies can evaluate their innovation programs by asking whether they’re innovation theater or producing tangible results for the company.
Value Creation: Creating new products, services, value propositions and business models. These programs invest in and manage innovation projects that create value by producing new growth or cost savings.
Culture Change: Transforming the company to establish an innovation culture. This may include new processes, metrics, incentive systems, or changing organizational structures. These transformations help the company innovate in a consistent and repeatable way.
Innovation Culture: what are the blockers and enablers of innovation in your company?
How much time does your leadership spend on innovation?
Where does innovation live in your organization and how much power does it have?
What is your kill rate for innovation projects?
To overcome the innovation capability gap, companies need to create a culture that enables the right behaviors to produce world-class innovative outcomes. A reliable indicator of the quality of your innovation culture is how innovation teams would describe it. Is it a culture that is dominated by blockers of innovation or enablers of innovation?
Leadership Support: How corporate leaders can have the biggest impact on innovation in terms of time spent, strategic guidance, and resource allocation.
Organizational Design: How to give innovation legitimacy and power, the right incentives, and clear policies for collaboration with the core business.
Innovation Practice: How to develop people’s innovation skills and experience and acquire the right innovation talent. How to ensure that we are using the right tools, processes, and metrics to test and adapt ideas in order to reduce risk.
I spent last week at a global Fortune 50 company offsite watching them grapple with disruption. This 100+-year-old company has seven major product divisions, each with hundreds of products. Currently a market leader, they’re watching a new and relentless competitor with more money, more people and more advanced technology appear seemingly out of nowhere, attempting to grab customers and gain market share.
This company was so serious about dealing with this threat (they described it as “existential to their survival”) that they had mobilized the entire corporation to come up with new solutions. This wasn’t a small undertaking, because the threats were coming from multiple areas in multiple dimensions: How do they embrace new technologies? How do they convert existing manufacturing plants (and their workforce) for a completely new set of technologies? How do they bring on new supply chains? How do they become present on new social media and communications channels? How do they connect with a new generation of customers who have no brand loyalty? How do they use the new distribution channels competitors have adopted? How do they make these transitions without alienating and losing their existing customers, distribution channels and partners? And how do they motivate their most important asset – their people – to operate with speed, urgency, and passion?
The company believed they had a handful of years to solve these problems before their decline would become irreversible. This meeting was a biannual gathering of all the leadership involved in the corporate-wide initiatives to out-innovate their new disruptors. They called it the “Tsunami Initiative” to emphasize they were fighting the tidal wave of creative destruction engulfing their industry.
To succeed, they realized this wasn’t simply a matter of coming up with one new product. It meant pivoting an entire company – and its culture. The scale of solutions needed dwarfs anything a single startup would be working on.
The company had hired a leading management consulting firm that helped them select 15 critical areas of change the Tsunami Initiative was tasked to work on. My hosts at the offsite, John and Avika, were the co-leads overseeing the 15 topic areas. The consulting firm suggested that they organize these 15 topic areas as a matrix organization, and the ballroom was filled with several hundred people in action groups and subgroups drawn from across the company: engineering, manufacturing, market analysis and collection, distribution channels, and sales. Some of the teams even included some of their close partners. Over a thousand more were working on the projects in offices scattered across the globe.
John and Avika had invited me to look at their innovation process and offer some suggestions.
Are these the real problems? This was one of the best organized innovation initiatives I have seen. All 15 topic areas had team leads presenting poster sessions, there were presenters from field sales and partners emphasizing the urgency and specificity of the problems, and there were breakout sessions where the topic area teams brainstormed with each other. At the end of the day people gathered around the firepit for informal conversations. It was a testament to John and Avika’s leadership that even off duty, people were passionately debating how to solve these problems. It was an amazing display of organizational esprit de corps.
While the subject of each of the 15 topic areas had been suggested by the consulting firm, it was in conjunction with the company’s corporate strategy group, and the people who generated these topic area requirements were part of the offsite. Not only were the requirements people in attendance but so was a transition team to facilitate the delivery of the products from these topic teams into production and sales.
However, I noticed that several of the requirements from corporate strategy seemed to be priorities handed to them by others (e.g. the problems the CFO or CEO or board thought they ought to work on, or the topics the consulting firm thought they should focus on) and/or came from subject matter experts (e.g. “I’m the expert in this field. No need to talk to anyone else; here’s what we need”). It appeared the corporate strategy group was delivering problems as fixed requirements, e.g. deliver these specific features and functions the solution ought to provide.
Here was a major effort involving lots of people but missing the chance to get to the root cause of the problems.
I told John and Avika that I understood some requirements were known and immutable. However, when all of the requirements are handed to the action teams this way the assumption is that the problems have been validated, and the teams do not need to do any further exploration of the problem space themselves.
Those tight bounds on requirements constrain the ability of the topic area action teams to:
Deeply understand the problems – who are the customers, internal stakeholders (sales, other departments) and beneficiaries (shareholders, etc.)? How to adjudicate among them – the priority of the solution, the timing of the solutions, the minimum feature set, dependencies, etc.
Figure out whether the problem is a symptom of something more important
Understand whether the problem is immediately solvable, requires multiple minimum viable products to test several solutions, or needs more R&D
I noticed that with all of the requirements fixed upfront, instead of having a freedom to innovate, the topic area action teams had become extensions of existing product development groups. They were getting trapped into existing mindsets and were likely producing far less than they were capable of. This is a common mistake corporate innovation teams tend to make.
I reminded them that when team members get out of their buildings and comfort zones, and directly talk to, observe, and interact with the customers, stakeholders and beneficiaries, it allows them to be agile, and the solutions they deliver will be needed, timely, relevant and take less time and resources to develop. It’s the difference between admiring a problem and solving one.
As I mentioned this, I realized having all fixed requirements is a symptom of something else more interesting – how the topic leads and team members were organized. From where I sat, it seemed there was a lack of a common framework and process.
Give the Topic Areas a Common Framework I asked John and Avika if they had considered offering the topic action team leaders and their team members a simple conceptual framework (one picture) and common language. I suggested this would allow the teams to know when and how to “ideate” and incorporate innovative ideas that accelerate better outcomes. The framework would use the initial corporate strategy requirements as a starting point rather than a fixed destination. See the diagram.
I drew them a simple chart and explained that most problems start in the bottom right box.
These are “unvalidated” problems. Teams would use a customer discovery process to validate them. (At times some problems might require more R&D before they can be solved.) Once the problems are validated, teams move to the box on the bottom left and explore multiple solutions. Both boxes on the bottom are where ideation and innovation-type problem/solution brainstorming are critical. At times this can be accelerated by bringing in the horizon 3, out-of-the-box thinkers that every company has, and letting them lend their critical eye to the problem/solution.
If a solution is found and solves the problem, the team heads up to the box on the top left.
But I explained that very often the solution is unknown. In that case think about having the teams do a “technical terrain walk.” This is the process of describing the problem to multiple sources (vendors, internal developers, other internal programs) and then debriefing on the sum of what was found. A terrain walk often discovers that the problem is actually a symptom of another problem, or that the sources see it as a different version of the problem, or that an existing solution already exists or can be modified to fit.
But often, no existing solution exists. In this case, teams could head to the box on the top right and build Minimum Viable Products – the smallest feature set to test with customers and partners. This MVP testing often results in new learnings from the customers, beneficiaries, and stakeholders – for example, they may tell the team that the first 20% of the deliverable is “good enough,” or that the problem has changed, or the timing has changed, or it needs to be compatible with something else, etc. Finally, when a solution is wanted by customers/beneficiaries/stakeholders and is technically feasible, the teams move to the box on the top left.
The result of this would be teams rapidly iterating to deliver solutions wanted and needed by customers within the limited time the company had left.
Creative destruction Those companies that make it through do so with an integrated effort of inspired and visionary leadership, motivated people, innovative products, and relentless execution and passion.
Watching and listening to hundreds of people fighting the tsunami in a legendary company was humbling.
I hope they make it.
Lessons Learned
Creative destruction and disruption will happen to every company. How will you respond?
Topic action teams need to deeply understand the problems as the customer understands them, not just what the corporate strategy requirements dictate
This can’t be done without talking directly to the customers, internal stakeholders, and partners
Consider whether the corporate strategy team should act more as facilitators than gatekeepers
A light-weight way to keep topic teams in sync with corporate strategy is to offer a common innovation language and problem and solution framework
A journey of a thousand miles begins with a single step
Lǎozi 老子
I just had lunch with Shenwei, one of my ex-students who had just taken a job in a mid-sized consulting firm. After a bit of catching up I noticed he was looking a bit lost. “I just got handed a project to help our firm enter a new industry – semiconductors. They want me to map out the space so we can figure out where we can add value.
When I asked what they already knew about it, they tossed me a tall stack of industry and stock analyst reports, company names, web sites, blogs. I started reading through a bunch of it and I’m drowning in data but don’t know where to start. I feel like I don’t know a thing.”
I told Shenwei I was happy for him because he had just been handed an awesome learning opportunity – how to rapidly understand and then map any new market. He gave me an “easy for you to say” look, but before he could object I handed him a pen and a napkin and asked him to write down the names of companies and concepts he read about that have anything to do with the semiconductor business – in 30 seconds. He quickly came up with a list with 9 names/terms. (See Mapping – First Pass)
“Great, now we have a start. Now give me a few words that describe what they do, or mean, or what you don’t know about them.”
Don’t let the enormity of unknowns frighten you. Start with what you do know.
After a few minutes he came up with a napkin sketch that looked like the picture in Mapping – Second Pass. Now we had some progress.
I pointed out he now had a starter list that not only contained companies but the beginning of a map of the relationships between those companies. And while he had a few facts, others were hypotheses and concepts. And he had a ton of unanswered questions.
We spent the next 20 minutes deconstructing that sketch and mapping out the Second Pass list as a diagram (see Mapping – Third Pass.)
As you keep reading more materials, you’ll have more questions than facts. Your goal is to first turn the questions into testable hypotheses (guesses). Then see if you can find data that turns the hypotheses into facts. For a while the questions will start accumulating faster than the facts. That’s OK.
Note that even with just the sparse set of information Shenwei had, in the bottom right-hand corner of his third mapping pass, a relationship diagram of the semiconductor industry was beginning to emerge.
Drawing a diagram of the relationships of companies in an industry can help you deeply understand how the industry works and who the key players are. Start building one immediately. As you find you can’t fill in all the relationships, the gaps outlining what you need to learn will become immediately visible.
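As an illustration only (the segment names below are placeholders, not Shenwei’s actual map), a relationship map like this can be kept as a simple graph in code as it grows, with the gaps printed out automatically:

```python
# A minimal sketch of keeping an industry relationship map as a graph.
# Segment names are placeholders, not a real map of the industry.
from collections import defaultdict

relationships = defaultdict(list)   # edge reads: supplier -> customers it sells to

def add_relationship(supplier: str, customer: str) -> None:
    """Record that `supplier` sells into `customer`."""
    relationships[supplier].append(customer)
    relationships.setdefault(customer, [])   # make sure every node appears

# Facts and hypotheses collected so far (placeholders).
add_relationship("EDA tools", "Chip designers")
add_relationship("Equipment makers", "Foundries")
add_relationship("Chip designers", "Foundries")
add_relationship("Foundries", "Device makers")

# Nodes with no outgoing edges are the gaps that still need research.
for node, customers in relationships.items():
    if not customers:
        print(f"TODO: who does '{node}' sell to?")
```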
As the information fog was beginning to lift, I could see Shenwei’s confidence returning. I pointed out that he had a real advantage that his assignment was in a known industry with lots of available information. He quickly realized that he could keep adding information to the columns in the third mapping pass as he read through the reports and web sites.
Google and Google Scholar are your best friends. As you discover new information, expand your search terms.
My suggestion was to use the diagram in the third mapping pass as the beginning of a wall chart – either physically (or virtually if he could keep it all in his head). And every time he learned more about the industry to update the relationship diagram of the industry and its segments. (When he pointed out that there were existing diagrams of the semiconductor industry he could copy, I suggested that he ignore them. The goal was for him to understand the industry well enough that he could draw his own map ab initio – from the beginning. And if he did so, he might create a much better one.)
When lunch was over Shenwei asked if it was OK if he checked in with me as he learned new things and I agreed. What he didn’t know was that this was only the first step in a ten-step industry mapping process.
Epilog
Over the next few weeks Shenwei shared what he had learned and sent me his increasingly refined and updated industry relationship map. (The 4th mapping pass showed up 48 hours later.) In exchange I shared with him the news that he was on step one of a ten-step industry mapping program. Over the next few weeks he quickly built on the industry map to answer questions 2 through 10 below.
Two weeks later he handed his leadership an industry report that covered the ten steps below and contained a sophisticated industry diagram he created from scratch. A far cry from his original napkin sketch!
Six months later his work on this project convinced his company that there was a large opportunity in the semiconductor space, and they started a new practice with him in it. His work won him the “best new employee” award.
The Ten Steps to Map any Industry
Start by continuously refining your understanding of the industry by diagramming it. List all the new words you encounter and create a glossary in your own words. Start collecting the best sources of information you’ve read.
Basic Industry Understanding
Diagram the industry and its segments
Start with anything
Build your learning by successive iteration
Who are the key suppliers to each segment?
How does this industry feed into the larger economy?
Create a glossary of industry unique terms
Can you explain them to others? Are there analogies to other markets?
Who are the industry experts in each segment? For the entire industry?
Economic experts? E.g. industry analysts, universities, think tanks
Technology experts? E.g. universities, think tanks
Geographic experts?
Key Conferences, blogs, web sites, etc.
What are the best open-source data feeds?
What are the best paid resources?
Overlay numbers, dollars, market share, Compound Annual Growth Rate (CAGR) on all parts of the industry diagram. That will show the velocity and direction of the market.
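For reference, CAGR is the constant annual growth rate that connects a beginning value to an ending value over a number of years. A quick sketch, with made-up numbers:

```python
# CAGR = (ending_value / beginning_value) ** (1 / years) - 1
def cagr(beginning_value: float, ending_value: float, years: float) -> float:
    return (ending_value / beginning_value) ** (1 / years) - 1

# Illustrative numbers only: a segment growing from $10B to $18B over 4 years.
print(f"{cagr(10e9, 18e9, 4):.1%}")   # roughly 15.8% per year
```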
Detailed Industry Understanding
Who are the market leaders? New entrants? In revenue, market share and growth rate
In the U.S.
Western countries
China
Understand the technology flows
Who builds on top of whom
Who is critical versus who can be substituted
Understand the economic flows
Who buys from whom in this industry?
Who buys the output from this industry?
How cyclical is demand?
What are the demand drivers?
How do companies inside each segment get funded? Any differences in capital requirements? Ease of starting, etc.
If applicable, understand the personnel flow for each segment
Do people move just between their segments or up and down through the entire industry?
Where do they get trained?
The beginner’s forecasting method is to simply extrapolate current growth rates forward. But in today’s technology markets, discontinuities are coming fast and furious. Are there other technologies from adjacent markets that will impact this one (e.g. AI, quantum, high-performance computing)? Are there other global or national economic initiatives that could change the shape of the market?
Last month the U.S. passed the CHIPS and Science Act, one of the first pieces of national industrial policy – government planning and intervention in a specific industry — in the last 50 years, in this case for semiconductors. After the celebratory champagne has been drunk and the confetti floats to the ground it’s helpful to put the CHIPS Act in context and understand the work that government and private capital have left to do.
Today the United States is in great power competition with China. It’s a contest over which nation’s diplomatic, information, military and economic system will lead the world in the 21st century. The outcome will determine whether we face a Chinese dystopian future or a democratic one, where individuals and nations get to make their own choices. At the heart of this contest is leadership in emerging and disruptive technologies – running the gamut from semiconductors and supercomputers to biotech and blockchain and everything in between.
National Industrial Policy – U.S. versus China Unlike the U.S., China manages its industrial policy via top-down 5-year plans. Their overall goal is to turn China into a technologically advanced and militarily powerful state that can challenge U.S. commercial and military leadership. China has also embraced the idea that national security is inexorably intertwined with commercial technology (semiconductors, drones, AI, machine learning, autonomy, biotech, cyber, quantum, high-performance computing, commercial access to space, et al.) They’ve pursued what they call military/civil fusion – building a dual-use ecosystem by tightly coupling their commercial technology companies with their defense ecosystem.
China has used its last three 5-year plans to invest in critical technologies (semiconductors, supercomputers, AI/ML, quantum, access to space, biotech) as a national priority. They have built a sophisticated public/private financing ecosystem to support these plans. The Chinese technology funding ecosystem includes regional investment funds that exceed 700 billion dollars (what they call their Civil/Military Guidance Funds). These are investment vehicles in which central and local government agencies make investments that are combined with private venture capital and State-Owned Enterprises in areas of strategic importance. They are tightly coupling critical civilian companies to their defense ecosystem to help them develop military weapons and strategic surprises. (Tai Ming Cheung’s book is the best description of the system.)
The U.S. has nothing comparable.
In contrast, for the last several decades, planning in the U.S. economy was left to “the market.” Driven by economic theory from the Chicago School of Economics, its premise is that free markets best allocate resources in an economy and that minimal, or even no, government intervention is best for economic prosperity. We ran our economy on this theory as a bipartisan experiment. Optimizing profit above all else led to wholesale offshoring of manufacturing and entire industries in order to lower costs. Investors shifted to making massive investments in industries with the quickest and greatest returns without long-term capital investments (e.g. social media, ecommerce, gaming) instead of in hardware, semiconductors, advanced manufacturing, transportation infrastructure, etc. The result was that, by default, private equity and venture capital became the de facto decision makers of U.S. industrial policy.
With the demise of the Soviet Union and the U.S. as the sole superpower, this “profits first” strategy was “good enough” as there was no other nation that could match our technical superiority. That changed when we weren’t paying attention.
China’s Ambition and Strategic Surprises In the first two decades of the 21st century, while the U.S. was focused on combating non-nation states (ISIS, Al-Qaeda…) U.S. policymakers failed to understand China’s size, scale, ambition, and national commitment to surpass the U.S. as the global leader in technology. Not just in “a” technology but in all of those that are critical to both our national and economic security in this century.
China’s top-down national industrial policy means we are being out-planned, outmanned, and outspent. By some estimates, China could be the leader in a number of critical technology areas sooner than we think. While Chinese investment in technology at times has been redundant and wasteful, the sum of these tech investments has resulted in a series of strategic surprises to the U.S.– hypersonics, ballistic missiles with maneuverable warheads as aircraft carrier killers, fractional orbital bombardment systems, rapid advances in space, semiconductors, supercomputers, and biotech …with more surprises likely – all with the goal to gain superiority over the U.S. both commercially and militarily.
Limits and Obstacles to China’s Dominance However, America has advantages that China lacks: capital markets that can be incented not coerced, untapped innovation talent willing to help, labor markets that can be upskilled, university and corporate research that still excels, etc. At the same time, a few cracks are showing in China’s march to technology supremacy – the detention of some of their most successful entrepreneurs and investors, a crackdown on “superfluous” tech (gaming, online tutoring), and a slowdown of listings on China’s version of NASDAQ, the Shanghai Stock Exchange’s STAR Market – which may signal that the party is reining in its “anything goes” approach to surpassing the U.S. Simultaneously the U.S. Commerce Department has begun to prohibit export of critical equipment and components that China needs to build its tech ecosystem.
Billionaires and Venture Capital Funding Defense Innovation In the U.S., the DoD’s traditional suppliers of defense tools, technologies, and weapons – the prime contractors and federal labs – are no longer the leaders in many of these emerging and disruptive technologies. And while the Department of Defense has world-class people and organizations, they were built for a world that no longer exists. (Its inability to rapidly acquire and deploy commercial systems requires an organizational redesign on the scale of the Goldwater/Nichols Act, not a reform.)
Technology innovation in many areas now falls to commercial companies. In lieu of a coherent U.S. national investment strategy across emerging and disruptive technologies (think of the CHIPS Act times ten), billionaires in the U.S. have started their own initiatives – Elon Musk – SpaceX and Starlink (reusable rockets and space-based broadband internet), Palmer Luckey – Anduril (AI and Machine Learning for defense), Peter Thiel – Palantir (data analytics). And in the last few years a series of defense-focused venture funds – Shield Capital, Lux Capital, and others – have emerged.
However, depending on billionaires interested in defense is not a sustainable strategy, and venture capital invests in businesses that can become profitable in 10 years or less. This means that technologies that might take decades to mature (fusion, activities in space, new industrial processes, …) get caught up and die in a “Valley of Death.” Attempts to bridge this Valley of Death often find technology companies relying on government capital. These programs (DIU, In-Q-Tel, AFWERX, et al) are limited in scope, time and success at scale. These government investment programs have largely failed to scale these emerging and disruptive technologies for four reasons:
Government agencies have limited access to top investment talent to help them make sophisticated technical investment decisions
Government agencies lack the commercialization skills to help founders turn technical ideas into commercial ventures.
While the Dept of Defense has encouraged starting new ventures, it has failed to match that with the acquisition dollars to scale them. There’s no coherent/committed DoD strategy to create a new generation of prime contractors around these emerging and disruptive technologies.
No private or government fund operates as “patient capital” – investing in critical deep technologies that may take more than a decade to mature and scale
America’s Frontier Fund Today one private capital fund is attempting to solve this problem. Gilman Louie, the founder of In-Q-Tel, has started America’s Frontier Fund (AFF). This new fund will invest in key critical deep technologies to help the U.S. keep pace with the Chinese onslaught of capital focused on this area. AFF plans to raise one billion dollars in “patient private capital” from both public and private sources and to be entirely focused on identifying critical technologies and strategic investing. Setting up their fund as a non-profit allows them to focus on long-term investments for the country, not just what’s expedient to maximize profits. It will ensure these investments grow into large commercial and dual-use companies focused on the national interest.
They’ve built an extraordinary team of experienced venture capitalists (I’ve known Gilman Louie and Steve Weinstein for decades), a world-class chief scientist, a startup incubation team, and they come with a unique and deep understanding of the intersection of national security and emerging and disruptive technologies.
AFF is the most promising effort I have seen in tackling the long-term challenges of funding and scaling emerging and disruptive technologies head-on.
At stake is whether the rest of the 21st century will be determined by an authoritarian government willing to impose a dystopian future on the world, or free nations able to determine their own future.
These are tough problems to solve, and no single fund can take on the massive investments China is making, but it’s possible that AFF’s market-driven approach, when combined with the government’s halting steps reengaging in industrial policy, can tip the scale back in our favor.
How does a newly hired Chief Technology Officer (CTO) find and grow the islands of innovation inside a large company?
And how do you avoid wasting your first six months as a new CTO thinking you’re making progress while the status quo works to keep you at bay?
I just had coffee with Anthony, a friend who was just hired as the Chief Technology Officer (CTO) of a large company (30,000+ people). He previously cofounded several enterprise software startups, and his previous job was building a new innovation organization from scratch inside another large company. But this was the first time he had been the CTO of a company this size.
Good News and Bad His good news was that his new company provides essential services and regardless of how much they stumbled they were going to be in business for a long time. But the bad news was that the company wasn’t keeping up with new technologies and new competitors who were moving faster. And the fact that they were an essential service made the internal cultural obstacles for change and innovation that much harder.
We both laughed when he shared that the senior execs told him that all the existing processes and policies were working just fine. It was clear that at least two of the four divisions didn’t really want him there. Some groups think he’s going to muck with their empires. Some of the groups are dysfunctional. Some are, as he said, “world-class people and organizations for a world that no longer exists.”
So the question we were pondering was, how do you quickly infiltrate a large, complex company of that size? How do you put wins on the board and get a coalition working? Perhaps by getting people to agree to common problems and strategies? And/or finding the existing organizational islands of innovation that were already delivering and help them scale?
The Journey Begins In his first week the exec staff had pointed him to the existing corporate incubator. Anthony had long come to the same conclusion I had, that highly visible corporate incubators do a good job of shaping culture and getting great press, but most often their biggest products were demos that never get deployed to the field. Anthony concluded that the incubator in his new company was no exception. Successful organizations recognize that innovation isn’t a single activity (incubators, accelerators, hackathons); it is a strategically organized end-to-end process from idea to deployment.
In addition, he was already discovering that almost every division and function was building groups for innovation, incubation and technology scouting. Yet no one had a single road map for who was doing what across the enterprise. And more importantly it wasn’t clear which, if any, of those groups were actually continuously delivering products and services at high speed. His first job was to build a map of all those activities.
Innovation Heroes are Not Repeatable or Scalable Over coffee Anthony offered that in a company this size he knew he would find “innovation heroes” – the individuals others in the company point to who single-handedly fought the system and got a new product, project or service delivered (see article here.) But if that was all his company had, his work was going to be much tougher than he thought, as innovation heroics as the sole source of deployment of new capabilities are a sign of a dysfunctional organization.
Anthony believed one of his roles as CTO was to:
Map and evaluate all the innovation, incubation and technology scouting activities
Help the company understand they need innovation and execution to occur simultaneously. (This is the concept of an ambidextrous organization (see this HBR article).)
Educate the company that innovation and execution have different processes, people, and culture. They need each other – and need to respect and depend on each other
Create an innovation pipeline – from problem to deployment – and get it adopted at scale
Anthony was hoping that somewhere three, four or five levels down the organization were the real centers of innovation, where existing departments/groups – not individuals – were already accelerating the mission/delivering innovative products/services at high speed. His challenge was to:
Find these islands of innovation and who was running them, and understand if/how they:
Leveraged existing company competencies and assets
Co-opted/bypassed existing processes and procedures
Ran continuous customer discovery to create products that customers need and want
Figured out how to deliver with speed and urgency
And whether they had somehow made this a repeatable process
If these groups existed, his job as CTO was to take their learning and:
Figure out what barriers the innovation groups were running into and help build innovation processes in parallel to those for execution
Use their work to create a common language and tools for innovation around rapid acceleration of existing mission and delivery
Make permanent delivering products and services at speed with a written innovation doctrine and policy
Instrument the process with metrics and diagnostics
Get out of the office So with another cup of coffee the question we were trying to answer was, how does a newly hired CTO find the real islands of innovation in a company his size?
A first place to start was with the innovation heroes/rebels. They often know where all the innovation bodies were buried. But Anthony’s insight was he needed to get out of his 8th floor office and spend time where his company’s products and services were being developed and delivered.
It was likely that the most innovative groups were not simply talking about innovation, but were the ones rapidly delivering innovative solutions to customers’ needs.
One Last Thing As we were finishing our coffee Anthony said, “I’m going to let a few of the execs know I’m not out for turf because I only intend to be here for a few years.” I almost spit out the rest of my coffee. I asked how many years the division C-level staff had been at the company. “Some of them for decades,” he replied. I pointed out that in a large organization saying you’re just “visiting” will set you up for failure, as the executives who have made the company their career will simply wait you out.
As he left, he looked a bit more concerned than when we started. “Looks like I have my work cut out for me.”
Lessons Learned
Large companies often have divisions and functions with innovation, incubation and technology scouting all operating independently with no common language or tools
Innovation isn’t a single activity (incubators, accelerators, hackathons); it is a strategically organized end-to-end process from idea to deployment
Somewhere three, four or five levels down the organization are the real centers of innovation – accelerating mission/delivering innovative products/services at high speed
The CTO’s job is to:
create a common process, language and tools for innovation
make them permanent with a written innovation doctrine and policy
Hundreds of billions in public and private capital is being invested in Artificial Intelligence (AI) and Machine Learning companies. The number of patents filed in 2021 is more than 30 times higher than in 2015 as companies and countries across the world have realized that AI and Machine Learning will be a major disruptor and potentially change the balance of military power.
Until recently, the hype exceeded reality. Today, however, advances in AI in several important areas (here, here, here, here and here) equal and even surpass human capabilities.
If you haven’t paid attention, now’s the time.
Artificial Intelligence and the Department of Defense (DoD)
The Department of Defense believes that Artificial Intelligence is such a foundational set of technologies that it started a dedicated organization – the JAIC – to enable and implement artificial intelligence across the Department. It provides the infrastructure, tools, and technical expertise for DoD users to successfully build and deploy their AI-accelerated projects.
Some specific defense related AI applications are listed later in this document.
We’re in the Middle of a Revolution Imagine it’s 1950, and you’re a visitor who traveled back in time from today. Your job is to explain the impact computers will have on business, defense and society to people who are using manual calculators and slide rules. You succeed in convincing one company and a government to adopt computers and learn to code much faster than their competitors/adversaries. And they figure out how they could digitally enable their business – supply chain, customer interactions, etc. Think about the competitive edge they’d have by today in business or as a nation. They’d steamroll everyone.
That’s where we are today with Artificial Intelligence and Machine Learning. These technologies will transform businesses and government agencies. Today, 100s of billions of dollars in private capital have been invested in 1,000s of AI startups. The U.S. Department of Defense has created a dedicated organization to ensure its deployment.
But What Is It? Compared to the classic computing we’ve had for the last 75 years, AI has led to new types of applications, e.g. facial recognition; new types of algorithms, e.g. machine learning; new types of computer architectures, e.g. neural nets; new hardware, e.g. GPUs; new types of software developers, e.g. data scientists; all under the overarching theme of artificial intelligence. The sum of these feels like buzzword bingo. But they herald a sea change in what computers are capable of doing, how they do it, and what hardware and software is needed to do it.
This brief will attempt to describe all of it.
New Words to Define Old Things One of the reasons the world of AI/ML is confusing is that it’s created its own language and vocabulary. It uses new words to define programming steps, job descriptions, development tools, etc. But once you understand how the new world maps onto the classic computing world, it starts to make sense. So first a short list of some key definitions.
AI/ML – a shorthand for Artificial Intelligence/Machine Learning
Artificial Intelligence (AI) – a catchall term used to describe “Intelligent machines” which can solve problems, make/suggest decisions and perform tasks that have traditionally required humans to do. AI is not a single thing, but a constellation of different technologies.
Machine Learning (ML) – a subfield of artificial intelligence. Humans combine data with algorithms (see here for a list) to train a model using that data. This trained model can then make predictions on new data (is this picture a cat, a dog or a person?) or support decision-making processes (like understanding text and images) without being explicitly programmed to do so.
Machine learning algorithms – computer programs that adjust themselves to perform better as they are exposed to more data. The “learning” part of machine learning means these programs change how they process data over time. In other words, a machine-learning algorithm can adjust its own settings, given feedback on its previous performance in making predictions about a collection of data (images, text, etc.). (A toy sketch of this feedback loop appears after this glossary.)
Deep Learning/Neural Nets – a subfield of machine learning. Neural networks make up the backbone of deep learning. (The “deep” in deep learning refers to the depth of layers in a neural network.) Neural nets are effective at a variety of tasks (e.g., image classification, speech recognition). A deep learning neural net algorithm is given massive volumes of data, and a task to perform – such as classification. The resulting model is capable of solving complex tasks such as recognizing objects within an image and translating speech in real time. In reality, the neural net is a logical concept that gets mapped onto a physical set of specialized processors. (See here.)
Data Science – a new field of computer science. Broadly it encompasses data systems and processes aimed at maintaining data sets and deriving meaning out of them. In the context of AI, it’s the practice of people who are doing machine learning.
Data Scientists – responsible for extracting insights that help businesses make decisions. They explore and analyze data using machine learning platforms to create models about customers, processes, risks, or whatever they’re trying to predict.
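The “machine learning algorithms” entry above describes a program that adjusts its own settings based on feedback about its errors. As a toy illustration only (not from any real system), here is a one-parameter model nudging its single weight after every prediction error:

```python
# Toy sketch of "learning from feedback": a one-parameter model adjusts its own
# weight based on its prediction error. Data and numbers are made up.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]   # (input, target) pairs

weight = 0.0              # the single "setting" the algorithm adjusts itself
learning_rate = 0.01

for epoch in range(200):
    for x, target in data:
        prediction = weight * x                # the model's guess
        error = prediction - target           # feedback: how wrong was the guess?
        weight -= learning_rate * error * x    # adjust the setting to reduce the error

print(f"learned weight: {weight:.2f}")         # converges near 2.0 for this toy data
```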
What’s Different? Why is Machine Learning Possible Now? To understand why AI/Machine Learning can do these things, let’s compare them to computers before AI came on the scene. (Warning – simplified examples below.)
Classic Computers
For the last 75 years computers (we’ll call these classic computers) have both shrunk to pocket size (iPhones) and grown to the size of warehouses (cloud data centers), yet they all continued to operate essentially the same way.
Classic Computers – Programming Classic computers are designed to do anything a human explicitly tells them to do. People (programmers) write software code (programming) to develop applications, thinking a priori about all the rules, logic and knowledge that need to be built into an application so that it can deliver a specific result. These rules are explicitly coded into a program using a software language (Python, JavaScript, C#, Rust, …).
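As a contrast with the machine learning sections that follow, here is a toy sketch of that classic approach: a human writes the rules explicitly and a priori (the spam-filter rules below are hypothetical).

```python
# Classic programming: the rules are written by a person, in advance.
SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def is_spam(message: str) -> bool:
    words = set(message.lower().split())
    # Human-authored rule: flag any message containing two or more trigger words.
    return len(words & SPAM_WORDS) >= 2

print(is_spam("Claim your free prize now, you are a winner"))   # True
print(is_spam("Lunch at noon?"))                                 # False
```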
Classic Computers – Compiling The code is then compiled using software to translate the programmer’s source code into a version that can be run on a target computer/browser/phone. For most of today’s programs, the computer used to develop and compile the code does not have to be that much faster than the one that will run it.
Classic Computers – Running/Executing Programs Once a program is coded and compiled, it can be deployed and run (executed) on a desktop computer, phone, in a browser window, a data center cluster, in special hardware, etc. Programs/applications can be games, social media, office applications, missile guidance systems, bitcoin mining, or even operating systems e.g. Linux, Windows, IOS. These programs run on the same type of classic computer architectures they were programmed in.
Classic Computers – Software Updates, New Features For programs written for classic computers, software developers receive bug reports, monitor for security breaches, and send out regular software updates that fix bugs, increase performance and at times add new features.
Classic Computers – Hardware The CPUs (Central Processing Units) that run these classic computer applications all have the same basic design (architecture). The CPUs are designed to handle a wide range of tasks quickly in a serial fashion. These CPUs range from the Intel x86 chips and the ARM cores in an Apple M1 SoC to the z15 in IBM mainframes.
Machine Learning
In contrast to programming on classic computing with fixed rules, machine learning is just like it sounds – we can train/teach a computer to “learn by example” by feeding it lots and lots of examples. (For images a rule of thumb is that a machine learning algorithm needs at least 5,000 labeled examples of each category in order to produce an AI model with decent performance.) Once it is trained, the computer runs on its own and can make predictions and/or complex decisions.
Just as traditional programming has three steps – first coding a program, next compiling it and then running it – machine learning also has three steps: training (teaching), pruning and inference (predicting by itself).
Machine Learning – Training Unlike programming classic computers with explicit rules, training is the process of “teaching” a computer to perform a task e.g. recognize faces, signals, understand text, etc. (Now you know why you’re asked to click on images of traffic lights, crosswalks, stop signs, and buses or type the text of a scanned image in reCAPTCHA.) Humans provide massive volumes of “training data” (the more data, the better the model’s performance) and select the appropriate algorithm to find the best optimized outcome. (See the detailed “machine learning pipeline” section for the gory details.)
By running an algorithm selected by a data scientist on a set of training data, the Machine Learning system generates the rules embedded in a trained model. The system learns from examples (training data), rather than being explicitly programmed. (See the “Types of Machine Learning” section for more detail.) This self-correction is pretty cool. An input to a neural net results in a guess about what that input is. The neural net then takes its guess and compares it to a ground-truth about the data, effectively asking an expert “Did I get this right?” The difference between the network’s guess and the ground truth is its error. The network measures that error, and walks the error back over its model, adjusting weights to the extent that they contributed to the error.
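A toy sketch of that guess/compare/adjust loop, training a tiny two-layer network on the XOR problem with plain NumPy (illustrative only; real systems use frameworks such as PyTorch or TensorFlow and far more data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # training data
y = np.array([[0], [1], [1], [0]], dtype=float)               # ground truth (XOR)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output layer weights

for step in range(20000):
    hidden = sigmoid(X @ W1 + b1)                # forward pass
    guess = sigmoid(hidden @ W2 + b2)            # the network's guess
    error = guess - y                            # compare guess to ground truth
    # walk the error back and adjust each weight by its contribution to the error
    grad_out = error * guess * (1 - guess)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

print(np.round(guess.ravel(), 2))                # typically close to [0, 1, 1, 0]
```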
Just to make the point again: The algorithms combined with the training data – not external human computer programmers – create the rules that the AI uses. The resulting model is capable of solving complex tasks such as recognizing objects it’s never seen before, translating text or speech, or controlling a drone swarm.
(Instead of building a model from scratch, for common machine learning tasks you can now buy pretrained models from others (here and here), much like chip designers buying IP Cores.)
Machine Learning Training – Hardware Training a machine learning model is a very computationally intensive task. AI hardware must be able to perform thousands of multiplications and additions in a mathematical process called matrix multiplication. It requires specialized chips to run fast. (See the AI semiconductor section for details.)
Machine Learning – Simplification via pruning, quantization, distillation Just like classic computer code needs to be compiled and optimized before it is deployed on its target hardware, the machine learning models are simplified and modified (pruned) to use less computing power, energy, and memory before they’re deployed to run on their hardware.
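A conceptual sketch of what pruning and quantization do to a single weight matrix; real toolchains (TensorFlow Lite, TensorRT, ONNX Runtime, etc.) are far more involved, and the numbers here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(256, 256)).astype(np.float32)   # one layer's weights

# Pruning: zero out the 80% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.80)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# Quantization: map the surviving float32 weights onto 8-bit integers.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)

print(f"non-zero weights kept: {np.count_nonzero(pruned) / pruned.size:.0%}")
print(f"memory: {weights.nbytes} bytes -> {quantized.nbytes} bytes")
```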
Machine Learning – Inference Phase Once the system has been trained it can be copied to other devices and run. And the computing hardware can now make inferences (predictions) on new data that the model has never seen before.
Inference can even occur locally on edge devices where physical devices meet the digital world (routers, sensors, IOT devices), close to the source of where the data is generated. This reduces network bandwidth issues and eliminates latency issues.
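A sketch of that hand-off in miniature, using scikit-learn and joblib simply as stand-ins for whatever training and serving stack a team actually uses:

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Training: computationally heavy, done once on the training data.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
joblib.dump(model, "model.joblib")                 # ship the trained model

# ...later, on a server or an edge device: load the model and run inference.
deployed = joblib.load("model.joblib")
print(deployed.predict([[5.1, 3.5, 1.4, 0.2]]))    # prediction on data it has never seen
```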
Machine Learning Inference – Hardware Inference (running the model) requires substantially less compute power than training. But inference also benefits from specialized AI chips. (See the AI semiconductor section for details.)
Machine Learning – Performance Monitoring and Retraining Just like classic computers where software developers do regular software updates to fix bugs and increase performance and add features, machine learning models also need to be updated regularly by adding new data to the old training pipelines and running them again. Why?
Over time machine learning models get stale. Their real-world performance generally degrades over time if they are not updated regularly with new training data that matches the changing state of the world. The models need to be monitored and retrained regularly for data and/or concept drift, harmful predictions, performance drops, etc. To stay up to date, the models need to re-learn the patterns by looking at the most recent data that better reflects reality.
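A minimal sketch of such a monitoring check; the helpers here (load_recent_labeled_data, retrain_and_deploy) are hypothetical placeholders for whatever pipeline a team actually runs.

```python
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92    # accuracy measured at deployment time (assumed number)
ALLOWED_DROP = 0.05

def check_for_drift(model, load_recent_labeled_data, retrain_and_deploy):
    """Compare the model's accuracy on fresh labeled data against its baseline."""
    X_recent, y_recent = load_recent_labeled_data()             # newly labeled data
    recent_accuracy = accuracy_score(y_recent, model.predict(X_recent))
    if recent_accuracy < BASELINE_ACCURACY - ALLOWED_DROP:
        retrain_and_deploy()    # performance degraded: rerun training on newer data
    return recent_accuracy
```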
One Last Thing – “Verifiability/Explainability” Understanding how an AI works is essential to fostering trust and confidence in AI production models.
Neural Networks and Deep Learning differ from other types of Machine Learning algorithms in that they have low explainability. They can generate a prediction, but it is very difficult to understand or explain how it arrived at its prediction. This “explainability problem” is often described as a problem for all of AI, but it’s primarily a problem for Neural Networks and Deep Learning. Other types of Machine Learning algorithms – for example, decision trees or linear regression – have very high explainability. The results of the five-year DARPA Explainable AI Program (XAI) are worth reading here.
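A small illustration of the difference: a decision tree’s learned rules can be printed and read directly (here with scikit-learn), while there is no comparably readable view of a deep neural network’s millions of weights.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)

# The trained model is a readable set of if/else rules on the input features.
print(export_text(tree, feature_names=list(data.feature_names)))
```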
So What Can Machine Learning Do?
It’s taken decades, but as of today, in its simplest implementations, machine learning applications can do some tasks better and/or faster than humans. Machine Learning is most advanced and widely applied today in processing text (through Natural Language Processing), followed by understanding images and videos (through Computer Vision), and analytics and anomaly detection. For example:
Recognize and Understand Text/Natural Language Processing AI is better than humans on basic reading comprehension benchmarks like SuperGLUE and SQuAD and their performance on complex linguistic tasks is almost there. Applications: GPT-3, M6, OPT-175B, Google Translate, Gmail Autocomplete, Chatbots, Text summarization.
Write Human-like Answers to Questions and Assist in Writing Computer Code An AI can write original text that is indistinguishable from text created by humans. Examples: GPT-3, Wu Dao 2.0. It can also generate computer code. Examples: GitHub Copilot, Wordtune
Recognize and Understand Images and Video Streams An AI can see and understand what it sees. It can identify and detect an object or a feature in an image or video. It can even identify faces. It can scan news broadcasts or read and assess text that appears in videos. It has uses in threat detection – airport security, banks, and sporting events. In medicine, to interpret MRIs or to design drugs. And in retail, to scan and analyze in-store imagery to intuitively determine inventory movement. Examples of ImageNet benchmarks here and here
Turn 2D Images into 3D Rendered Scenes An AI using NeRFs (“neural radiance fields”) can take 2D snapshots and render a finished 3D scene in real time to create avatars or scenes for virtual worlds, to capture video conference participants and their environments in 3D, or to reconstruct scenes for 3D digital maps. The technology is an enabler of the metaverse, generating digital representations of real environments that creators can modify and build on. And self-driving cars are using NeRFs to render city-scale scenes spanning multiple blocks.
Detect Changes in Patterns/Recognize Anomalies An AI can recognize patterns that don’t match the behaviors expected for a particular system, out of millions of different inputs or transactions. These applications can discover evidence of an attack on financial networks, detect fraud in insurance filings or credit card purchases, identify fake reviews, or even tag sensor data in industrial facilities that indicates a safety issue. Examples here, here and here. (A minimal code sketch follows this list.)
Power Recommendation Engines An AI can provide recommendations based on user behaviors. Used in ecommerce, it provides accurate suggestions of products to users for future purchases based on their shopping history. Examples: Netflix, TikTok, CrossingMinds and Recommendations AI
Recognize and Understand Your Voice An AI can understand spoken language. Then it can comprehend what is being said and in what context. This can enable chatbots to have a conversation with people. It can record and transcribe meetings. (Some versions can even read lips to increase accuracy.) Applications: Siri/Alexa/Google Assistant. Example here
Create Artificial Images An AI can create artificial images (DeepFakes) that are indistinguishable from real ones using Generative Adversarial Networks. Useful in entertainment, virtual worlds, gaming, fashion design, etc. Synthetic faces are now indistinguishable from photos of real people, and are even rated as more trustworthy. Paper here.
Create Artist Quality Illustrations from A Written Description AI can generate images from text descriptions, creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways. An example application is Dall-E
Generative Design of Physical Products Engineers can input design goals into AI-driven generative design software, along with parameters such as performance or spatial requirements, materials, manufacturing methods, and cost constraints. The software explores all the possible permutations of a solution, quickly generating design alternatives. Example here.
Sentiment Analysis An AI leverages deep natural language processing, text analysis, and computational linguistics to gain insight into customer opinion, understand consumer sentiment, and measure the impact of marketing strategies. Examples: Brand24, MonkeyLearn
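As a small illustration of the sentiment analysis item above, here is a hedged sketch using the open-source Hugging Face transformers library (our choice of tooling; the commercial products named above use their own stacks). The pipeline call downloads a default pretrained English sentiment model on first run.

```python
# Sketch of sentiment analysis with a pretrained model (transformers; illustrative).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

reviews = [
    "The checkout was fast and the product arrived a day early.",
    "Support never answered my emails and the item broke in a week.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {review}")
```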
What Does this Mean for Businesses?
Skip this section if you’re interested in national security applications
Hang on to your seat. We’re just at the beginning of the revolution. The next phase of AI, powered by ever more powerful AI hardware and cloud clusters, will combine some of these basic algorithms into applications that do things no human can. It will transform business and defense in ways that will create new applications and opportunities.
Human-Machine Teaming Applications with embedded intelligence have already begun to appear thanks to massive language models. For example, Copilot acts as a pair-programmer in Microsoft’s Visual Studio Code. It’s not hard to imagine DALL-E 2 as an illustration assistant in a photo editing application, or GPT-3 as a writing assistant in Google Docs.
AI in Medicine AI applications are already appearing in radiology, dermatology, and oncology. Examples: IDx-DR, OsteoDetect, Embrace2. AI medical image analysis can automatically detect lesions and tumors with diagnostic accuracy equal to or greater than that of humans. For pharma, AI will power drug discovery and design, finding new drug candidates. The FDA has a plan for approving AI software here and a list of AI-enabled medical devices here.
Autonomous Vehicles Harder than it first seemed, but car companies like Tesla will eventually achieve better-than-human autonomy for highway driving and, later, city streets.
Decision support Advanced virtual assistants can listen to and observe behaviors, build and maintain data models, and predict and recommend actions to assist people with and automate tasks that were previously only possible for humans to accomplish.
Supply chain management AI applications are already appearing in predictive maintenance, risk management, procurement, order fulfillment, supply chain planning and promotion management.
Marketing AI applications are already appearing in real-time personalization, content and media optimization and campaign orchestration to augment, streamline and automate marketing processes and tasks constrained by human costs and capability, and to uncover new customer insights and accelerate deployment at scale.
Making business smarter: Customer Support AI applications are already appearing in virtual customer assistants with speech recognition, sentiment analysis, automated/augmented quality assurance and other technologies providing customers with 24/7 self- and assisted-service options across channels.
AI in National Security
Much like classical computers, AI is dual-use: technology developed for commercial applications can also be used for national security.
AI/ML and Ubiquitous Technical Surveillance AI/ML have made most cities untenable for traditional tradecraft. Machine learning can integrate travel data (customs, airline, train, car rental, hotel, license plate readers…), integrate feeds from CCTV cameras for facial and gait recognition, add breadcrumbs from wireless devices, and then combine them with DNA sampling. The result is automated persistent surveillance.
China’s employment of AI as a tool of repression and surveillance of the Uyghurs is a reminder of a dystopian future in which totalitarian regimes use AI-enabled ubiquitous surveillance to repress and monitor their own populaces.
AI/ML on the Battlefield AI will enable new levels of performance and autonomy for weapon systems. Autonomously collaborating assets (e.g., drone swarms, ground vehicles) that can coordinate attacks, ISR missions, & more.
Fusing and making sense of sensor data (detecting threats in optical/SAR imagery, classifying aircraft based on radar returns, searching for anomalies in radio frequency signatures, etc.). Machine learning is better and faster than humans at finding targets hidden in a high-clutter background, enabling automated target detection and fires from satellites and UAVs.
Use AI/ML countermeasures against adversarial, low probability of intercept/low probability of detection (LPI/LPD) radar techniques in radar and communication systems.
AI/ML in Collection The front end of intelligence collection platforms has created a firehose of data that has overwhelmed human analysts. “Smart” sensors coupled with inference engines can pre-process raw intelligence and prioritize what data to transmit and store – helpful in degraded or low-bandwidth environments.
Human-Machine Teaming in Signals Intelligence Embedded intelligence has already begun to appear in commercial applications thanks to massive language models (for example, Copilot as a pair-programmer). It’s not hard to imagine an AI that can detect and isolate anomalies and other patterns of interest in all sorts of signal data faster and more reliably than human operators.
AI-enabled natural language processing, computer vision, and audiovisual analysis can vastly reduce manual data processing. Advances in speech-to-text transcription and language analytics now enable reading comprehension, question answering, and automated summarization of large quantities of text. This not only prioritizes the work of human analysts, it’s a major force multiplier.
AI can also be used to automate data conversion such as translations and decryptions, accelerating the ability to derive actionable insights.
Human-Machine Teaming in Tasking and Dissemination AI-enabled systems will automate and optimize tasking and collection for platforms, sensors, and assets in near-real time in response to dynamic intelligence requirements or changes in the environment.
AI will be able to automatically generate machine-readable versions of intelligence products and disseminate them at machine speed so that computer systems across the IC and the military can ingest and use them in real time without manual intervention.
Human-Machine Teaming in Exploitation and Analytics AI-enabled tools can augment filtering, flagging, and triage across multiple data sets. They can identify connections and correlations more efficiently and at a greater scale than human analysts, and can flag those findings and the most important content for human analysis.
AI can fuse data from multiple sources, types of intelligence, and classification levels to produce accurate predictive analysis in a way that is not currently possible. This can improve indications and warnings for military operations and active cyber defense.
AI/ML Information Warfare Nation states have used AI systems to enhance disinformation campaigns and cyberattacks. This includes using “DeepFakes” (fake videos generated by a neural network that are nearly indistinguishable from reality). They are harvesting data on Americans to build profiles of our beliefs, behavior, and biological makeup for tailored attempts to manipulate or coerce individuals.
But because a large percentage of it is open source, AI is not limited to nation states. AI-powered cyberattacks, deepfakes, and AI software paired with commercially available drones can create “poor man’s smart weapons” for use by rogue states, terrorists and criminals.
AI/ML Cyberwarfare AI-enabled malware can learn and adapt to a system’s defensive measures by probing a target system for configuration and operational patterns, then customizing the attack payload and timing its execution to maximize impact. Conversely, AI-enabled cyber-defensive tools can proactively locate and address network anomalies and system vulnerabilities.
Attacks Against AI – Adversarial AI As AI proliferates, defeating adversaries will be predicated on defeating their AI and vice versa. As Neural Networks take over sensor processing and triage tasks, a human may only be alerted if the AI deems something suspicious. An adversary therefore only needs to defeat the AI to evade detection, not necessarily a human.
Adversarial attacks against AI fall into three types:
Synthetic data generation – to feed false information
Data analysis – for AI-assisted classical attack generation
AI Attack Surfaces Electronic Attack (EA), Electronic Protection (EP), and Electronic Support (ES) all have analogues in the AI algorithmic domain. In the future, we may play the same game over the “Algorithmic Spectrum,” denying our adversaries their AI capabilities while defending ours. Others can steal or poison our models or manipulate our training data.
What Makes AI Possible Now?
Four changes make Machine Learning possible now:
Massive Data Sets
Improved Machine Learning algorithms
Open-Source Code, Pretrained Models and Frameworks
More computing power
Massive Data Sets Machine Learning algorithms tend to require large quantities of training data in order to produce high-performance AI models. (Training OpenAI’s GPT-3 Natural Language Model with 175 billion parameters takes 1,024 Nvidia A100 GPUs more than one month.) Today, strategic and tactical sensors pour in a firehose of images, signals and other data. Billions of computers, digital devices and sensors connected to the Internet produce and store large volumes of data, providing other sources of intelligence. For example, facial recognition requires millions of labeled images of faces for training data.
Of course more data only helps if the data is relevant to your desired application. Training data needs to match the real-world operational data very, very closely to train a high-performing AI model.
Improved Machine Learning algorithms The first Machine Learning algorithms are decades old, and some remain incredibly useful. However, researchers have discovered new algorithms that have greatly advanced the field’s cutting edge. These new algorithms have made Machine Learning models more flexible, more robust, and more capable of solving different types of problems.
Open-Source Code, Pretrained Models and Frameworks Previously, developing Machine Learning systems required a lot of expertise and custom software development that put it out of reach for most organizations. Now open-source code libraries and developer tools allow organizations to use and build upon the work of external communities. No team or organization has to start from scratch, and many parts that used to require highly specialized expertise have been automated. Even non-experts and beginners can create useful AI tools. In some cases, open-source ML models can be reused outright or purchased. Combined with standard competitions, open source, pretrained models and frameworks have moved the field forward faster than any federal lab or contractor. It’s been a feeding frenzy, with the best and brightest researchers trying to one-up each other to prove which ideas are best.
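As a small illustration of the reuse point, here is a sketch (PyTorch/torchvision is our choice of framework, and the weights API assumes a recent torchvision release; any modern framework offers the same pattern) that pulls a vision model someone else trained on ImageNet and runs inference with it.

```python
# Reusing an open-source pretrained model instead of training from scratch.
import torch
from torchvision import models

# One line loads a ResNet-18 with ImageNet weights trained by someone else.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Inference on a dummy image-sized tensor; real use would preprocess a photo.
dummy_image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy_image)
print("Predicted ImageNet class index:", logits.argmax(dim=1).item())
```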
The downside is that, unlike past DoD technology development – where the DoD led it, could control it, and had the most advanced technology (like stealth and electronic warfare) – in most cases the DoD will not have the most advanced algorithms or models. The analogy for AI is closer to microelectronics than it is to EW. The path forward for the DoD should be supporting open research while optimizing on data set collection, harvesting research results, and fast application.
More computing power – special chips Machine Learning systems require a lot of computing power. Today, it’s possible to run Machine Learning algorithms on massive datasets using commodity Graphics Processing Units (GPUs). While many AI performance improvements have been due to human cleverness in better models and algorithms, most of the gains have come from the massive increase in compute performance. (See the semiconductor section.)
More computing power – AI In the Cloud The rapid growth in the size of machine learning models has been enabled by the move to large data center clusters. The size of a machine learning model is limited by the time it takes to train it. For example, in training on images, model size scales with the number of pixels: ImageNet images are 224×224 pixels, but HD (1920×1080) images require 40x more computation and memory. Large Natural Language Processing tasks – e.g. summarizing articles or English-to-Chinese translation – require enormous models like OpenAI’s GPT-3. GPT-3 uses 175 billion parameters and was trained on a cluster with 1,024 Nvidia A100 GPUs that cost ~$25 million! (Which is why large clusters exist in the cloud or at the largest companies and government agencies.) Facebook’s Deep Learning and Recommendation Model (DLRM) was trained on 1TB of data and has 24 billion parameters. Some cloud vendors train on >10TB data sets.
Instead of investing in the massive amount of computing needed for training, companies can use the enormous on-demand, off-premises hardware in the cloud (e.g. Amazon AWS, Microsoft Azure) both for training machine learning models and for deploying inference.
We’re Just Getting Started Progress in AI has been growing exponentially. The next 10 years will see a massive improvement in AI inference and training capabilities. This will require regular refreshes of the hardware – on chips and cloud clusters – to take advantage of it. This is the AI version of Moore’s Law on steroids – applications that are completely infeasible today will be easy in 5 years.
What Can’t AI Do?
While AI can do a lot of things better than humans when focused on a narrow objective, there are many things it still can’t do. AI works well in a specific domain where you have lots of data, time and resources to train, and the domain expertise to set the right goals/rewards during training, but that is not always the case.
For example, AI models are only as good as the fidelity and quality of the training data. Bad labels can wreak havoc on your training results. Protecting the integrity of the training data is critical.
In addition, AI is easily fooled by out-of-domain data (things it hasn’t seen before). This can happen by “overfitting” – when a model trains for too long on sample data or when the model is too complex, it can start to learn the “noise,” or irrelevant information, within the dataset. When the model memorizes the noise and fits too closely to the training set, the model becomes “overfitted,” and it is unable to generalize well to new data. If a model cannot generalize well to new data, then it will not be able to perform the classification or prediction tasks it was intended for. However, if you pause too early or exclude too many important features, you may encounter the opposite problem, and instead, you may “underfit” your model. Underfitting occurs when the model has not trained for enough time, or the input variables are not significant enough to determine a meaningful relationship between the input and output variables.
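A toy example makes the trade-off visible. In this sketch (scikit-learn, with an invented noisy dataset, both our choices), a degree-1 polynomial underfits a curved signal, a degree-15 polynomial starts fitting the noise, and the gap between training and validation error exposes both failure modes.

```python
# Underfitting vs. overfitting on a small noisy dataset (illustrative sketch).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=40)   # signal + noise
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    val_err = mean_squared_error(y_val, model.predict(X_val))
    # Low train error with much higher validation error signals overfitting;
    # high error on both signals underfitting.
    print(f"degree {degree:>2}: train MSE {train_err:.3f}  validation MSE {val_err:.3f}")
```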
AI is also poor at estimating uncertainty/confidence (and explaining its decision-making). It can’t choose its own goals. (Executives need to define the decision that the AI will execute. Without well-defined decisions to be made, data scientists will waste time, energy and money.) Except for simple cases, an AI can’t (yet) figure out cause and effect or why something happened. It can’t think creatively or apply common sense.
AI is not very good at creating a strategy (unless it can pull from previous examples and mimic them, but it then fails when faced with the unexpected). And it lacks generalized intelligence, i.e. the ability to generalize knowledge and translate learning across domains.
All of these are research topics actively being worked on. Solving these will take a combination of high-performance computing, advanced AI/ML semiconductors, creative machine learning implementations and decision science. Some may be solved in the next decade, at least to a level where a human can’t tell the difference.
Where is AI in Business Going Next?
Skip this section if you’re interested in national security applications
Just as classic computers were applied to a broad set of business, science and military applications, AI is doing the same. AI is exploding not only in research and infrastructure (which go wide) but also in the application of AI to vertical problems (which go deep and depend more than ever on expertise). Some of the new applications on the horizon include human/AI teaming (AI helping in programming and decision making), smarter robotics and autonomous vehicles, AI-driven drug discovery and design, healthcare diagnostics, chip electronic design automation, and basic science research.
Advances in language understanding are being pursued to create systems that can summarize complex inputs and engage through human-like conversation, a critical component of next-generation teaming.
Where is AI and National Security Going Next?
In the near future AI may be able to predict the future actions an adversary could take and the actions a friendly force could take to counter them. The 20th century Observe–Orient–Decide–Act (OODA) loop is retrospective; an observation cannot be made until after the event has occurred. An AI-enabled decision-making cycle might be ‘sense–predict–agree–act’: AI senses the environment; predicts what the adversary might do and offers what a future friendly force response should be; the human part of the human–machine team agrees with this assessment; and AI acts by sending machine-to-machine instructions to the small, agile and many autonomous warfighting assets deployed en masse across the battlefield.
A Once-in-a-Generation Event Imagine it’s the 1980s and you’re in charge of an intelligence agency. SIGINT and COMINT were analog and RF. You had worldwide collection systems with bespoke systems in space, air, underwater, etc. And then you wake up to a world that shifts from copper to fiber. Most of your people and equipment are going to be obsolete, and you need to learn how to capture those new bits. Almost every business process needed to change, new organizations needed to be created, new skills were needed, and old ones were obsoleted. That’s what AI/ML is going to do to you and your agency.
The primary obstacle to innovation in national security is not technology, it is culture. The DoD and IC must overcome a host of institutional, bureaucratic, and policy challenges to adopting and integrating these new technologies. Many parts of our culture are resistant to change, reliant on traditional tradecraft and means of collection, and averse to risk-taking (particularly acquiring and adopting new technologies and integrating outside information sources).
History tells us that late adopters fall by the wayside as more agile and opportunistic governments master new technologies.
Carpe Diem.
Want more Detail?
Read on if you want to know about Machine Learning chips, see a sample Machine Learning Pipeline and learn about the four types of Machine Learning.
Skip this section if all you need to know is that special chips are used for AI/ML.
AI/ML, semiconductors, and high-performance computing are intimately intertwined – and progress in each is dependent on the others. (See the “Semiconductor Ecosystem” report.)
Some machine learning models can have trillions of parameters and require a massive number of specialized AI chips to run. Edge computers are significantly less powerful than the massive compute power that’s located at data centers and the cloud. They need low power and specialized silicon.
Why Dedicated AI Chips and Chip Speed Matter Dedicated chips for neural nets (e.g. Nvidia GPUs, Xilinx FPGAs, Google TPUs) are faster than conventional CPUs for three reasons: 1) they use parallelization, 2) they have larger memory bandwidth and 3) they have fast memory access.
There are three types of AI Chips:
Graphics Processing Units (GPUs) – Thousands of cores, parallel workloads, widespread use in machine learning
Field-Programmable Gate Arrays (FPGAs) – Good for algorithms such as compression, video encoding, cryptocurrency, genomics, and search. Needs specialists to program
Application-Specific Integrated Circuits (ASICs) – custom chips, e.g. Google TPUs
Matrix multiplication plays a big part in neural network computations, especially if there are many layers and nodes. Graphics Processing Units (GPUs) contain 100s or 1,000s of cores that can do these multiplications simultaneously. And neural networks are inherently parallel which means that it’s easy to run a program across the cores and clusters of these processors. That makes AI chips 10s or even 1,000s of times faster and more efficient than classic CPUs for training and inference of AI algorithms. State-of-the-art AI chips are dramatically more cost-effective than state-of-the-art CPUs as a result of their greater efficiency for AI algorithms.
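A rough sketch of the point (PyTorch is our choice of framework; absolute timings depend entirely on the hardware): the same large matrix multiply is run on the CPU and, if one is available, on a GPU, where it is spread across thousands of cores.

```python
# Why GPUs matter for neural networks: the core operation is a big matrix multiply.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
_ = a @ b                                   # matrix multiply on the CPU
print(f"CPU:  {time.time() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    start = time.time()
    _ = a_gpu @ b_gpu                       # the same multiply across GPU cores
    torch.cuda.synchronize()                # wait for the asynchronous GPU work
    print(f"GPU:  {time.time() - start:.3f} s")
```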
Cutting-edge AI systems require not only AI-specific chips, but state-of-the-art AI chips. Older AI chips incur huge energy consumption costs that quickly balloon to unaffordable levels. Using older AI chips today means overall costs and slowdowns at least an order of magnitude greater than for state-of- the-art AI chips.
Cost and speed make it virtually impossible to develop and deploy cutting-edge AI algorithms without state-of-the-art AI chips. Even with state-of-the-art AI chips, training a large AI algorithm can cost tens of millions of dollars and take weeks to complete. With general-purpose chips like CPUs or older AI chips, this training would take much longer and cost orders of magnitude more, making staying at the R&D frontier impossible. Similarly, performing inference using less advanced or less specialized chips could involve similar cost overruns and take orders of magnitude longer.
In addition to off-the-shelf AI chips from Nvidia, Xilinx and Intel, large companies like Facebook, Google, and Amazon have designed their own chips to accelerate AI. The opportunity is so large that there are hundreds of AI accelerator startups designing their own chips, funded by tens of billions of dollars of venture capital and private equity. None of these companies own a chip manufacturing plant (a fab) so they all use a foundry (an independent company that makes chips for others) like TSMC in Taiwan (or SMIC in China for its defense-related silicon).
A Sample of AI GPU, FPGA and ASIC AI Chips and Where They’re Made
IP (Intellectual Property) Vendors Also Offer AI Accelerators AI chip designers can buy AI IP Cores – prebuilt AI accelerators – from Synopsys (EV7x), Cadence (Tensilica AI), Arm (Ethos), Ceva (SensPro2, NeuPro), Imagination (Series4), ThinkSilicon (Neox), FlexLogic (eFPGA), Edgecortix and others.
Other AI Hardware Architectures Spiking Neural Networks (SNNs) are a completely different approach from Deep Neural Nets. A form of neuromorphic computing, they try to emulate how a brain works. SNN neurons use simple counters and adders – no matrix multiply hardware is needed and power consumption is much lower. SNNs are good at unsupervised learning – e.g. detecting patterns in unlabeled data streams. Combined with their low power, they’re a good fit for sensors at the edge. Examples: BrainChip, GrAI Matter, Innatera, Intel.
Analog Machine Learning AI chips use analog circuits to do the matrix multiplication in memory. The result is extremely low power AI for always-on sensors. Examples: Mythic (AMP), Aspinity (AML100), Tetramem.
Optical (Photonics) AI Computation promises performance gains over standard digital silicon, and some designs are nearing production. They use intersecting coherent light beams rather than switching transistors to perform matrix multiplies. Computation happens in picoseconds and requires only power for the laser. (Though off-chip digital transitions still limit power savings.) Examples: Lightmatter, Lightelligence, Luminous, Lighton.
AI Hardware for the Edge As more AI moves to the edge, the Edge AI accelerator market is segmenting into high-end chips for camera-based systems and low-power chips for simple sensors. For example:
AI Chips in Autonomous Vehicles, Augmented Reality and Multicamera Surveillance Systems These inference engines require high performance. Examples: Nvidia (Orin), AMD (Versal), and Qualcomm (Cloud AI 100; Qualcomm also acquired Arriver for automotive software).
AI Chips in Cameras for facial recognition and surveillance. These inference chips require a balance of processing power and low power consumption. Putting an AI chip in each camera reduces latency and bandwidth. Examples: Hailo-8, Ambarella CV5S, Quadric (Q16), RealTek (3916N).
Ultralow-Power AI Chips Target IoT Sensors – IoT devices require very simple neural networks and can run for years on a single battery. Example applications: presence detection, wakeword detection, gunshot detection… Examples: Syntiant (NDP), Innatera, BrainChip
Running on these edge devices are deep learning models from companies such as OmniML and Foghorn, specifically designed for edge accelerators.
AI/ML Hardware Benchmarks While there are lots of claims about how much faster each of these chips is for AI/ML, there is now a set of standard benchmarks – MLCommons. These benchmarks were created by Google, Baidu, Stanford, Harvard and U.C. Berkeley.
One Last Thing – Non-Nvidia AI Chips and the “Nvidia Software Moat” New AI accelerator chips have to cross the software moat that Nvidia has built around its GPUs. Because popular AI applications and frameworks are built on Nvidia’s CUDA software platform, new AI accelerator vendors that want to port those applications to their chips have to build their own drivers, compilers, debuggers, and other tools.
Details of a machine learning pipeline
This is a sample of the workflow (a pipeline) data scientists use to develop, deploy and maintain a machine learning model (see the detailed description here.)
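For readers who want something more concrete than the diagram, here is a compressed sketch of the core stages (data preparation, training, evaluation, inference) using scikit-learn. This is our simplification; a production pipeline wraps data validation, versioning, deployment, and monitoring around these steps.

```python
# Compressed machine learning pipeline: prepare data, train, evaluate, infer.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                   # data preparation
    ("model", LogisticRegression(max_iter=1000)),  # the model to be trained
])
pipeline.fit(X_train, y_train)                     # training
print("held-out accuracy:", pipeline.score(X_test, y_test))          # evaluation
print("prediction for one new sample:", pipeline.predict(X_test[:1]))  # inference
```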
The Types of Machine Learning
Skip this section if you want to believe it’s magic.
Machine Learning algorithms fall into four classes:
Supervised Learning
Unsupervised Learning
Semi-supervised Learning
Reinforcement Learning
They differ based on:
What types of data their algorithms can work with
Whether the training data is labeled or unlabeled (the distinction between supervised and unsupervised learning)
How the system receives its data inputs
Supervised Learning
A “supervisor” (a human or a software system) accurately labels each of the training data inputs with its correct associated output
Note that pre-labeled data is only required for the training data that the algorithm uses to train the AI model
In operation, in the inference phase, the AI will be generating its own labels, the accuracy of which will depend on the AI’s training
Supervised Learning models can achieve extremely high performance, but they require very large, labeled datasets
Using labeled inputs and outputs, the model can measure its accuracy and learn over time
For images a rule of thumb is that the algorithm needs at least 5,000 labeled examples of each category in order to produce an AI model with decent performance
In supervised learning, the algorithm “learns” from the training dataset by iteratively making predictions on the data and adjusting for the correct answer.
While supervised learning models tend to be more accurate than unsupervised learning models, they require upfront human intervention to label the data appropriately.
Supervised Machine Learning – Categories and Examples:
Classification problems – use an algorithm to assign data into specific categories, such as separating apples from oranges. Or classify spam in a separate folder from your inbox. Linear classifiers, support vector machines, decision trees and random forest are all common types of classification algorithms.
Regression– understands the relationship between dependent and independent variables. Helpful for predicting numerical values based on different data points, such as sales revenue projections for a given business. Some popular regression algorithms are linear regression, logistic regression and polynomial regression.
Example algorithms include: Logistic Regression and Back Propagation Neural Networks
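A minimal sketch of supervised learning (scikit-learn is our choice of library, and the apples-vs-oranges data is invented for illustration): a human supplies the label for every training example, and the trained model then generates labels for fruit it has never seen.

```python
# Supervised learning sketch: labeled examples in, labels for new data out.
from sklearn.tree import DecisionTreeClassifier

# Features: [weight in grams, texture (0 = smooth, 1 = bumpy)], labeled by a human.
X_train = [[130, 0], [135, 0], [140, 0], [150, 1], [160, 1], [170, 1]]
y_train = ["apple", "apple", "apple", "orange", "orange", "orange"]

classifier = DecisionTreeClassifier().fit(X_train, y_train)

# Inference: the trained model now labels new, unlabeled fruit on its own.
print(classifier.predict([[132, 0], [165, 1]]))   # expected: ['apple' 'orange']
```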
Unsupervised Learning
These algorithms can analyze and cluster unlabeled data sets. They discover hidden patterns in data without the need for human intervention (hence, they are “unsupervised”)
They can extract features from the data without a label for the results
For an image classifier, an unsupervised algorithm would not identify the image as a “cat” or a “dog.” Instead, it would sort the training dataset into various groups based on their similarity
Unsupervised Learning systems are often less predictable, but as unlabeled data is usually more available than labeled data, they are important
Unsupervised algorithms are useful when developers want to understand their own datasets and see what properties might be useful in either developing automation or changing operational practices and policies
They still require some human intervention for validating the output
Unsupervised Machine Learning – Categories and Examples
Clustering groups unlabeled data based on similarities or differences. For example, K-means clustering algorithms assign similar data points into groups, where the K value represents the number of groups and therefore the granularity of the grouping. This technique is helpful for market segmentation, image compression, etc.
Association finds relationships between variables in a given dataset. These methods are frequently used for market basket analysis and recommendation engines, along the lines of “Customers Who Bought This Item Also Bought” recommendations.
Dimensionality reduction is used when the number of features (or dimensions) in a given dataset is too high. It reduces the number of data inputs to a manageable size while also preserving the data integrity. Often, this technique is used in the preprocessing data stage, such as when autoencoders remove noise from visual data to improve picture quality.
Example algorithms include: Apriori algorithm and K-Means
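A small sketch of unsupervised learning (scikit-learn; the two-blob data is invented for illustration): K-means groups unlabeled points by similarity without ever being told what the groups mean.

```python
# Unsupervised learning sketch: K-means clustering of unlabeled points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabeled data: two blobs of points, but the algorithm isn't told that.
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10).fit(points)
print("cluster centers:\n", kmeans.cluster_centers_)
print("cluster assigned to a new point:", kmeans.predict([[4.8, 5.1]]))
```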
Difference between supervised and unsupervised learning
The main difference: Labeled data
Goals: In supervised learning, the goal is to predict outcomes for new data. You know up front the type of results to expect. With an unsupervised learning algorithm, the goal is to get insights from large volumes of new data. The machine learning itself determines what is different or interesting from the dataset.
Applications: Supervised learning models are ideal for spam detection, sentiment analysis, weather forecasting and pricing predictions, among other things. In contrast, unsupervised learning is a great fit for anomaly detection, recommendation engines, customer personas and medical imaging.
Complexity: Supervised learning is a simple method for machine learning, typically calculated using programs like R or Python. In unsupervised learning, you need powerful tools for working with large amounts of unclassified data. Unsupervised learning models are computationally complex because they need a large training set to produce intended outcomes.
Drawbacks: Supervised learning models can be time-consuming to train, and the labels for input and output variables require expertise. Meanwhile, unsupervised learning methods can have wildly inaccurate results unless you have human intervention to validate the output variables.
Semi-Supervised Learning
“Semi-supervised” algorithms combine techniques from Supervised and Unsupervised algorithms for applications with a small set of labeled data and a large set of unlabeled data.
In practice, using them leads to exactly what you would expect: a mix of the strengths and weaknesses of the Supervised and Unsupervised approaches
Typical algorithms are extensions of other flexible methods that make assumptions about how to model the unlabeled data. An example is Generative Adversarial Networks, which, trained on photographs, can generate new photographs that look authentic to human observers (deepfakes)
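A brief sketch of the semi-supervised setup (scikit-learn's LabelSpreading is our choice of algorithm; others exist): a handful of labeled points, many unlabeled ones marked with -1, and the algorithm propagates labels across the unlabeled data based on similarity.

```python
# Semi-supervised learning sketch: few labels, many unlabeled points (marked -1).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

X, y = load_iris(return_X_y=True)

# Pretend we could only afford to label 5 examples of each of the three species
# (the iris dataset is ordered by class, 50 samples per class).
labeled_idx = np.concatenate([np.arange(0, 5), np.arange(50, 55), np.arange(100, 105)])
y_partial = np.full_like(y, -1)          # -1 means "unlabeled"
y_partial[labeled_idx] = y[labeled_idx]

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
accuracy = (model.transduction_ == y).mean()
print(f"accuracy of the labels spread to the unlabeled points: {accuracy:.2f}")
```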
Reinforcement Learning
Training data is collected by an autonomous, self-directed AI agent as it perceives its environment and performs goal-directed actions
The rewards are input data received by the AI agent when certain criteria are satisfied.
These criteria are typically unknown to the agent at the start of training
Rewards often contain only partial information. They don’t signal which inputs were good or not
The system is learning to take actions to maximize its receipt of cumulative rewards
Reinforcement AI can defeat humans – in chess, Go…
There are no labeled datasets for every possible move
There is no assessment of whether it was a “good” or “bad” move
Instead, partial labels reveal the final outcome “win” or “lose”
The algorithms explore the space of possible actions to learn the optimal set of rules for determining the best action that maximizes wins
Reinforcement Machine Learning – Categories and Examples
AlphaGo, a Reinforcement Learning system, played 4.9 million games of Go against itself in 3 days to learn how to play the game at a world-champion level
Reinforcement Learning is challenging to use in the real world, as the real world is not as heavily bounded as video games and time cannot be sped up
There are consequences to failure in the real world
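A toy sketch of the mechanics described above, kept deliberately far from any real-world use: tabular Q-learning on a five-cell corridor, written in plain Python/NumPy (the environment and hyperparameters are invented for illustration). The agent sees a reward only when it reaches the goal and learns, by trial and error, which action maximizes cumulative reward in each state.

```python
# Tabular Q-learning on a 5-cell corridor: learn to walk right to the goal.
import numpy as np

n_states, actions = 5, [-1, +1]             # move left or right along the corridor
q_table = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.5, 0.9, 0.2       # learning rate, discount, exploration

rng = np.random.default_rng(0)
for episode in range(200):
    state = 0
    while state != n_states - 1:            # an episode ends at the goal cell
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = rng.integers(2) if rng.random() < epsilon else int(q_table[state].argmax())
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0   # sparse, partial feedback
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q_table[state, a] += alpha * (
            reward + gamma * q_table[next_state].max() - q_table[state, a]
        )
        state = next_state

# The goal cell's row is never used; every other state should prefer action 1 (right).
print("learned best action per state (1 = move right):", q_table.argmax(axis=1))
```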
Portions of this post previously appeared in War On the Rocks.
Looking at a satellite image of Ukraine online, I realized it was from Capella Space – one of our Hacking for Defense student teams, which now has 7 satellites in orbit.
National Security is Now Dependent on Commercial Technology
They’re not the only startup in this fight. An entire wave of new startups and scaleups are providing satellite imagery and analysis, satellite communications, and unmanned aerial vehicles supporting the struggle.
For decades, satellites that took detailed pictures of Earth were only available to governments and the high-resolution images were classified. Today, commercial companies have their own satellites providing unclassified imagery. The government buys and distributes commercial images from startups to supplement their own and shares them with Ukraine as part of a broader intelligence-sharing arrangement that the head of Defense Intelligence Agency described as “revolutionary.” By the end of the decade, there will be 1000 commercial satellites for every U.S. government satellite in orbit.
At the onset of the war in Ukraine, Russia launched a cyber-attack on Viasat’s KA-SAT satellite, which supplies Internet across Europe, including to Ukraine. In response, to a (tweeted) request from Ukraine’s vice prime minister, Elon Musk’s Starlink satellite company shipped thousands of their satellite dishes and got Ukraine back on the Internet. Other startups are providing portable cell towers – “backpackable” and fixed. When these connect via satellite link, they can provide phone service and WIFI capability. Another startup is providing a resilient, mesh local area network for secure tactical communications supporting ground units.
Drone technology was initially only available to national governments and militaries but is now democratized to low price points and available as internet purchases. In Ukraine, drones from startups are being used as automated delivery vehicles for resupply, and for tactical reconnaissance to discover where threats are. When combined with commercial satellite imagery, this enables pinpoint accuracy to deliver maximum kinetic impact in stopping opposing forces.
Equipment from large military contractors and other countries is also part of the effort. However, the equipment listed above is available commercially off-the-shelf, at dramatically cheaper prices than what’s offered by the large existing defense contractors, and is developed and delivered in a fraction of the time. The Ukraine conflict is demonstrating the changing character of war: low-cost emerging commercial technology is extremely effective when deployed against the larger 20th-century industrialized force that Russia is fielding.
While we should celebrate the organizations that have created and fielded these systems, the battle for Ukraine illustrates much larger issues in the Department of Defense.
For the first time ever our national security is inexorably intertwined with commercial technology (drones, AI, machine learning, autonomy, biotech, cyber, semiconductors, quantum, high-performance computing, commercial access to space, et al.) And as we’re seeing on the Ukrainian battlefield they are changing the balance of power.
The DoD’s traditional suppliers of defense tools, technologies, and weapons – the prime contractors and federal labs – are no longer the leaders in these next-generation technologies – drones, AI, machine learning, semiconductors, quantum, autonomy, biotech, cyber, high performance computing, et al. They know this and know that weapons that can be built at a fraction of the cost and upgraded via software will destroy their existing business models.
Venture capital and startups have spent 50 years institutionalizing the rapid delivery of disruptive innovation. In the U.S., private investors spent $300 billion last year to fund new ventures that can move with the speed and urgency that the DoD now requires. Meanwhile China has been engaged in a Civil/Military Fusion program since 2015 to harness these disruptive commercial technologies for its national security needs.
China – Civil/Military Fusion
Every year the Secretary of Defense has to issue a formal report to Congress: Military and Security Developments Involving the People’s Republic of China. Six pages of this year’s report describe how China is combining its military-civilian sectors as a national effort for the PRC to develop a “world-class” military and become a world leader in science and technology. A key part of Beijing’s strategy includes developing and acquiring advanced dual-use technology. It’s worth thinking about what this means – China is not just using its traditional military contractors to build its defense ecosystem; they’re mobilizing their entire economy – commercial plus military suppliers. And we’re not.
DoD’s Civil/Military Orphan-Child – the Defense Innovation Unit In 2015, before China started its Civil/Military effort, then-Secretary of Defense Ash Carter saw the need for the DoD to understand, embrace and acquire commercial technology. To do so he started the Defense Innovation Unit (DIU). With offices in Silicon Valley, Austin, Boston, Chicago and Washington, DC, this is the one DoD organization with the staffing and mandate to match commercial startups or scaleups to pressing national security problems. DIU bridges the divide between DoD requirements and the commercial technology needed to address them with speed and urgency. It accelerates the connection of commercial technology to the military. Just as importantly, DIU helps the Department of Defense learn how to innovate at the same speed as tech-driven companies.
Many of the startups providing Ukraine satellite imagery and analysis, satellite communications, and unmanned aerial vehicles were found by the Defense Innovation Unit (DIU). Given that DIU is the Department of Defense’s most successful organization in developing and acquiring advanced dual-use technology, one would expect the department to scale the Defense Innovation Unit by a factor of ten. (Two years ago, the House Armed Services Committee in its Future of Defense Task Force report recommended exactly that—a 10X increase in budget.) The threats are too imminent and stakes too high not to do so.
So what happened?
Congress cut DIU’s budget by 20%.
And its well-regarded director just resigned in frustration because the Department is neither resourcing DIU nor moving fast enough or broadly enough in adopting commercial technology.
Why? The Defense Ecosystem is at a turning point. Defense innovation threatens entrenched interests. Given that the Pentagon budget is essentially fixed, creating new vendors and new national champions of the next generation of defense technologies becomes a zero-sum game.
The Defense Innovation Unit (DIU) had no advocates in its chain of command willing to go to bat for it, let alone scale it.
The Department of Defense has world-class people and organization for a world that no longer exists The Pentagon’s relationship with startups and commercial companies, already an arms-length one, is hindered by a profound lack of understanding about how the commercial innovation ecosystem works and its failure of imagination about what venture and private equity funded innovation could offer. In the last few years new venture capital and private equity firms have raised money to invest in dual-use startups. New startups focused on national security have sprung up and they and their investors have been banging on the closed doors of the defense department.
If we want to keep pace with our adversaries, we need to stop acting like we can compete with one hand tied behind our back. We need a radical reinvention of our civil/military innovation relationship. This would use Department of Defense funding, private capital, dual-use startups, existing prime contractors and federal labs in a new configuration that could look like this:
Create a new defense ecosystem encompassing startups, and mid-sized companies at the bleeding edge, prime contractors as integrators of advanced technology, federally funded R&D centers refocused on areas not covered by commercial tech (nuclear and hypersonics). Make it permanent by creating an innovation doctrine/policy.
Reorganize DoD Research and Engineering to allocate its budget and resources equally between traditional sources of innovation and new commercial sources of innovation.
Scale new entrants to the defense industrial base in dual-use commercial tech – AI/ML, Quantum, Space, drones, autonomy, biotech, underwater vehicles, shipyards, etc. that are not the traditional vendors. Do this by picking winners. Don’t give out door prizes. Contracts should be >$100M so high-quality venture-funded companies will play. And issue debt/loans to startups.
Acquire at Speed. Today, the average Department of Defense major acquisition program takes anywhere from nine to 26 years to get a weapon in the hands of a warfighter. DoD needs a requirements, budgeting and acquisition process that operates at commercial speed (18 months or less) which is 10x faster than DoD procurement cycles. Instead of writing requirements, the department should rapidly assess solutions and engage warfighters in assessing and prototyping commercial solutions. We’ll know we’ve built the right ecosystem when a significant number of major defense acquisition programs are from new entrants.
Acquire with a commercially oriented process. Congress has already granted the Department of Defense “Other Transaction Authority” (OTA) as a way to streamline acquisitions so they do not need to use Federal Acquisition Regulations (FAR). DIU has created a “Commercial Solutions Opening” to mirror a commercial procurement process that leverages OTA. DoD could be applying Commercial Solutions Openings on a much faster and broader scale.
Integrate and create incentives for the Venture Capital/Private Equity ecosystem to invest at scale. The most important incentive would be for DoD to provide significant contracts for new entrants. (One new entrant which DIU introduced, Anduril, just received a follow-on contract for $1 billion. This should be one of many such contracts and not an isolated example.) More examples could include: matching dollars for national security investments (similar to the SBIR program but for investors), public/private partnership investment funds, incentivize venture capital funds with no-carry loans (debt funding) to, or tax holidays and incentives – to get $10’s of billions of private investment dollars in technology areas of national interest.
Coordinate with Allies. Expand the National Security Innovation Base (NSIB) to an Allied Security Innovation Base. Source commercial technology from allies.
This is a politically impossible problem for the Defense Department to solve alone. Changes at this scale will require Congressional and executive office action. Hard to imagine in the polarized political environment. But not impossible.
Put Different People in Charge and reorganize around this new ecosystem. The threats, speed of change, and technologies the United States faces in this century require radically different mindsets and approaches than those it faced in the 20th century. Today’s leaders in the DoD, executive branch and Congress haven’t fully grasped the size, scale, and opportunity of the commercial innovation ecosystem or how to build innovation processes to move with the speed and urgency to match the pace China has set.
Change is hard – on the people and organizations inside the DoD who’ve spent years operating with one mindset to be asked to pivot to a new one.
But America’s adversaries have exploited the boundaries and borders between its defense and commercial and economic interests. Current approaches to innovation across the government — both in the past and under the current administration — are piecemeal, incremental, increasingly less relevant, and insufficient.
These are not problems of technology. It takes imagination, vision and the willingness to confront the status quo. So far, all are currently lacking.
Russia’s Black Sea flagship Moskva on the bottom of the ocean and the thousands of its destroyed tanks illustrate the consequences of a defense ecosystem living in the past. We need transformation not half-measures. The U.S. Department of Defense needs to change.
At the turn of the century after the dotcom crash, startup valuations plummeted, burn rates were unsustainable, and startups were quickly running out of cash. Most existing investors (those still in business) hoarded their money and stopped doing follow-on rounds until the rubble had cleared.
Except, that is, for the bottom feeders of the Venture Capital business – investors who “cram down” their companies. They offered desperate founders more cash but insisted on new terms, rewriting all the old stock agreements that previous investors and employees had. For existing investors, sometimes it was a “pay-to-play,” i.e. if you don’t participate in the new financing you lose. Other times it was simply a take-it-or-leave-it, here are the new terms. Some even insisted that all prior preferred stock had to be converted to common stock. For the common shareholders (employees, advisors, and previous investors), a cram down is a big middle finger, as it comes with a reverse split – meaning your common shares are now worth 1/10th, 1/100th or even 1/1000th of their previous value.
(A cram down is different than a down round. A down round is when a company raises money at a valuation that is lower than the company’s valuation in its prior financing round. But it doesn’t come with a massive reverse split or change in terms.)
They’re Back While cram downs never went away, the flood of capital in the last decade meant that most companies could raise another round. But now with the economic conditions changing, that’s no longer true. Startups that can’t find product/market fit and/or generate sufficient revenue and/or lacked patient capital are scrambling for dollars – and the bottom feeders are happy to help.
Why do VCs Do This? VCs will offer all kinds of reasons why – “it’s my fiduciary responsibility” (which is BS because venture capital is a power-law business, not a “salvage every penny” business), or “it’s just good business,” or “we’re opportunistic.” On one hand they’re right. Venture capital, like most private equity, is an unregulated financial asset class – anything goes. But the simpler and more painful truth is that it’s abusive and usurious.
Many VCs have no moral center in what they invest in or what they’ll do to maximize their returns. The same venture capital industry that gave us Apple, Intel, Tesla, and SpaceX also thinks addicting teens is a viable business model (Juul) or that destroying democracy (Facebook) is a great investment. And instead of society shunning them, we celebrate them and their returns. We let the VC narrative of “all VC investments are equally good” become “all investments are equally good for society.”
Why would any founder agree to this? No founder is prepared to watch their company crumble beneath them. There’s a growing sense of panic as you frantically work 100-hour weeks, knowing years of work are going to disappear unless you can find additional investment. You’re unable to sleep and trying not to fall into complete despair. Along comes an investor (often one of your existing ones) with a proposal to keep the company afloat and out of sheer desperation, you grab at it. You swallow hard when you hear the terms and realize it’s going to be a startup all over again. You rationalize that this is the only possible outcome, the only way to keep the company afloat.
But then there’s one more thing – to make it easier for you and a few key employees to swallow the cram down – they promise that you’ll get made whole again (by issuing you new stock) in the newly recapitalized company. Heck, all your prior investors, employees and advisors who trusted and bet on you get nothing, but you and a few key employees come out OK. All of a sudden the deal which seemed unpalatable is now sounding reasonable. You start rationalizing why this is good for everyone.
You just failed the ethical choice and forever ruined your reputation.
Cram downs wouldn’t exist without the founder’s agreement.
Stopping Cram Downs In the 20th century terrorists took hostages from many countries except the Soviet Union. Why? Western countries would negotiate frantically with the terrorists and offer concessions, money, prisoner exchanges, etc. Seeing their success, terrorists kept taking hostages. The Soviet Union? Terrorists took Russian hostages once. The Soviets sent condolences to the hostage families and never negotiated. Terrorists realized it was futile and focused on western hostages.
VCs will stop playing this game when founders stop negotiating.
You Have a Choice In the panic of finding money founders forget they have a choice. Walk away. Shut the company down and start another one. Stop rationalizing how bad a choice that is and convincing yourself that you’re doing the right thing. You’re not.
The odds are that after your new funding most of your employees will be left with little or nothing to show for their years of work. While a few cram downs may have been turned around (though I can’t think of any), given you haven’t found enough customers by now, the odds are you’re never going to be a successful enterprise. Your cram down investors will likely sell your technology for piece parts and/or use your company to benefit their other portfolio companies.
You think of the offer of cram down funding as a lifeline, but they’ve handed you a noose.
Time to Think With investors pressuring you and money running out, it’s easy to get so wound-up thinking that this is the only and best way out. If there ever was a time to pause and take a deep breath, it’s now. Realize you need time to put the current crisis in context and to visualize other alternatives. Take a day off and imagine what’s currently unimaginable – what would life be like after the company ends? What else have you always wanted to do? What other ideas do you have? Is now the time to reconnect with your spouse/family/others to decompress and get some of your own life back?
Don’t get trapped in your own head thinking you need to solve this problem by yourself. Get advice from friends, mentors and especially your early investors and advisors. Nothing guarantees permanently ruined relationships (and a ruined reputation) more than your early investors and advisors first hearing about your decision to take a cram down when you ask them for signatures on a decision that’s already been made.
In the long run, your employees, and the venture ecosystem would be better served if you used your experience and knowledge in a new venture and took another shot at the goal.
Winners leave the field with those they came with.
Lessons Learned
Cram downs are done by VC bottom feeders
They take an “unfair advantage” of desperate founders and contribute to the toxicity of the startup ecosystem
Founders often believe they need to take a cram down, rationalizing “I’ll never have another good idea, I have so much time and effort sunk into this startup, I don’t have enough energy to do it again, etc.”
Founders rationalize it’s good for their employees
Our goal for the Secretary’s visit was to give her a snapshot of how we’re supporting the Department of Defense priority of building an innovation workforce. We emphasized the critical distinction between a technical STEM-trained workforce (which we need) and an innovation workforce which we lack at scale.
Innovation incorporates lean methodologies (customer discovery, problem understanding, MVPs, pivots), coupled with speed and urgency, and a culture where failure equals rapid learning. All of these are accomplished with minimal resources to deploy at scale products/services that are needed and wanted. We pointed out that Silicon Valley and Stanford have done this for 50 years. And China is outpacing us by adopting the very innovation methods we invented, integrating commercial technology with academic research, and delivering it to the People’s Liberation Army.
Therein lies the focus of our Gordian Knot Center – connect STEM with policy education and leverage the synergies between the two to develop innovative leaders who understand technology and policy and can solve problems and deliver solutions at speed and scale.
What We Presented A key component of the Gordian Knot Center’s mission is to prepare and inspire future leaders to contribute meaningfully as part of the innovation work force. We combine the unique strengths of Stanford and its location in Silicon Valley to solve problems across the spectrum of activities that create and sustain national power. The range of resources and capabilities we bring to the fight from the center’s unique position include:
The insights and expertise of Stanford international and national security policy leaders
The technology insights and expertise of Stanford Engineering
Exceptional students willing to help the country win the Great Power Competition
Silicon Valley’s deep commercial technology ecosystem
Our experience in rapid problem understanding, rapid iteration and deployment of solutions with speed and urgency
Access to risk capital at scale
In the six months since we founded the Gordian Knot Center we have focused on six initiatives we wanted to share with Secretary Hicks. Rather than Joe Felter and I doing all of the talking, 25 of our students, scholars, mentors and alumni joined us to give the Secretary a 3-5 minute precis of their work, spanning across all six of the Gordian Knot initiatives. Highlights of these presentations include:
Hacking for Defense Teams – Vannevar Labs, FLIP, Disinformatix
Throughout the 90-plus-minute session, Dr. Hicks posed insightful questions for the students and told our gathering that one of her key priorities is to accelerate innovation adoption across the DoD, including organizational structure, processes, culture, and people.
It was encouraging to hear the words.
However, from where we sit..
Our national security is now inexorably intertwined with commercial technology and is hindered by our lack of an integrated strategy at the highest level.
Our adversaries have exploited the boundaries and borders between our defense and commercial and economic interests.
Our current approaches – both in the past and current administration – to innovation across the government are piecemeal, incremental, increasingly less relevant and insufficient.
Listening to the secretary’s conversations, I was further reminded of how much of a radical reinvention of our civil/military innovation relationship is necessary if we want to keep abreast of our adversaries. This would use DoD funding, private capital, dual-use startups, existing prime contractors and federal labs in a new configuration. It would:
Create a new defense ecosystem encompassing startups and scaleups at the bleeding edge, prime contractors as integrators of advanced technology, and federally funded R&D centers refocused on areas not covered by commercial tech (nuclear, hypersonics,…). Make it permanent by creating innovation doctrine/policy.
Create new national champions in dual-use commercial tech – AI/ML, Quantum, Space, drones, high performance computing, next gen networking, autonomy, biotech, underwater vehicles, shipyards, etc. – who are not the traditional vendors. Do this by picking winners. Don’t give out door prizes. Contracts should be >$100M so high-quality venture-funded companies will play. Until we have new vendors on the Major Defense Acquisition Program list, all we have in the DoD is innovation theater – not innovation.
Acquire at Speed. Today, the average DoD major acquisition program takes 9-26 years to get a weapon in the hands of a warfighter. We need a requirements, budgeting and acquisition process that operates at commercial speed (18 months or less) which is 10x faster than DoD procurement cycles. Instead of writing requirements, DoD should rapidly assess solutions and engage warfighters in assessing and prototyping commercial solutions.
Integrate and incent the Venture Capital/Private Equity ecosystem to invest at scale. Ask funders what it would take to invest at scale – e.g. create massive tax holidays and incentives to get investment dollars in technology areas of national interest.
Recruit and develop leaders across the Defense Department prepared to meet contemporary threats and reorganize around this new innovation ecosystem. The DoD has world-class people and organization for a world that in many ways no longer exists. The threats, speed of change and technologies we face in this century will require radically different mindsets and approaches than those we faced in the 20th century. Today’s senior DoD leaders must think and act differently than their predecessors of a decade ago. Leaders at every level must now understand the commercial ecosystem and how to move with the speed and urgency that China is setting.
It was clear that Deputy Secretary Hicks understands the need for most of if not all these and more. Unfortunately, given the DoD budget is essentially fixed, creating new Primes and new national champions of the next generation of defense technologies becomes a zero-sum game. It’s a politically impossible problem for the Defense Department to solve alone. Changes at this scale will require Congressional action. Hard to imagine in the polarized political environment. But not impossible.