Your Computer May Already be Hacked – NSA Inside?

In a time of universal deceit – telling the truth is a revolutionary act.
George Orwell

In Russia, President Putin’s office just stopped using PCs and switched to typewriters. What do they know that we don’t?

Perhaps it’s Intel NSA inside.

———

For those of you who haven’t kept up, the National Security Agency’s (NSA) Prism program has been in the news. Prism provides the NSA with access to data on the servers of Microsoft, Google, Facebook and others, extracting audio and video chats, photographs, e-mails, documents and more.

Prism is just a part of the NSA’s larger mass electronic surveillance program that covers every possible path someone might use to communicate: tapping raw data as it flows through fiber optic cables and Internet peering points, copying the addressees on all letters you physically mail, all credit card purchases, your phone calls and your location (courtesy of your smartphone).

All hell broke loose when Edward Snowden leaked all this to the press.

Given my talks on the Secret History of Silicon Valley, I was interviewed on NPR about the disclosure that the NSA said it had a new capability that tripled the number of Skype video calls being collected through Prism. Like most Americans I said, “I didn’t remember getting the memo that the 4th amendment to our constitution had been cancelled.”

But while the interviewer focused on the Skype revelation, I thought the most interesting part was the other claim, “that the National Security Agency already had pre-encryption stage access to email on Outlook.”  Say what??  They can see the plaintext on my computer before I encrypt it? That defeats any/all encryption methods. How could they do that?

Bypass Encryption
While most outside observers think the NSA’s job is cracking encrypted messages, as the Prism disclosures have shown, the actual mission is simply to read all communications. Cracking codes is a last resort.


The NSA has a history of figuring out how to get to messages before or after they are encrypted, whether by putting keyloggers on keyboards to record the keystrokes or by detecting the images of characters as they were being drawn on a CRT.

Today every desktop and laptop computer has another way for the NSA to get inside.

Intel Inside
It’s inevitable that complex microprocessors ship with bugs. When the first microprocessors shipped, the only thing you could hope was that a bug didn’t crash your computer. The only way the chip vendor could fix the problem was to physically revise the chip and put out a new version, and computer manufacturers and users were stuck if they had an old chip. After a particularly embarrassing math bug in 1994 that cost Intel $475 million, the company decided to fix the problem by allowing its microprocessors to load fixes automatically when your computer starts.


Starting in 1996 with the Intel P6 (Pentium Pro) and continuing through today’s P7 chips (Core i7), these processors contain instructions that are reprogrammable in what is called microcode. Intel can fix bugs on the chips by reprogramming a microprocessor’s microcode with a patch. This patch, called a microcode update, can be loaded into a processor using special CPU instructions reserved for this purpose. These updates are not permanent, which means each time you turn the computer on, its microprocessor is reset to its built-in microcode and the update needs to be applied again (through the computer’s BIOS).

Since 2000, Intel has put out 29 microcode updates to its processors. The microcode is distributed 1) by Intel, 2) by Microsoft integrated into a BIOS, or 3) as part of a Windows update. Unfortunately, the microcode update format is undocumented and the code is encrypted. This lets Intel make sure that third parties can’t make unauthorized add-ons to its chips. But it also means that no one can look inside to understand the microcode, which makes it impossible to know whether anyone is loading a backdoor into your computer.
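
You can’t see inside the encrypted update, but you can at least see which microcode revision your machine is currently running. Here’s a minimal sketch, assuming a Linux x86 system where the kernel exposes a “microcode” field in /proc/cpuinfo (field names and availability vary by platform):

```python
# Minimal sketch: report the microcode revision the Linux kernel sees for each CPU.
# Assumes an x86 system where /proc/cpuinfo exposes a "microcode" field.

def microcode_revisions(path="/proc/cpuinfo"):
    revisions = {}
    processor = None
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "processor":
                processor = value              # remember which CPU we're describing
            elif key == "microcode" and processor is not None:
                revisions[processor] = value   # e.g. "0x28"
    return revisions

if __name__ == "__main__":
    for cpu, rev in sorted(microcode_revisions().items(), key=lambda kv: int(kv[0])):
        print(f"CPU {cpu}: microcode revision {rev}")
```

Knowing the revision number tells you an update was applied; it tells you nothing about what the update actually does, which is the whole point of the argument above.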

The Dog That Never Barked
The NSA has been incredibly thorough in nailing down every possible way to tap into communications. Yet the one company whose name hasn’t come up as part of the surveillance network is Intel. Perhaps they are the only good guys in the entire Orwellian mess.

Or perhaps the NSA, working with Intel and/or Microsoft, has wittingly put backdoors in the microcode updates. A backdoor is a way of gaining illegal remote access to a computer by getting around the normal security built into the computer. Typically someone trying to sneak malicious software onto a computer would try to install a rootkit (software that tries to conceal the malicious code). A rootkit tries to hide itself and its code, but security-conscious sites can discover rootkits with tools that check kernel code and data for changes.

But what if you could use the configuration and state of the microprocessor hardware itself in order to hide? You’d be invisible to all rootkit detection techniques that check the operating system. Or what if you could make the microprocessor’s random number generator (the basis of encryption) not so random for a particular machine? (The NSA’s biggest coup was inserting backdoors in crypto equipment the Swiss sold to other countries.)
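
To make the randomness point concrete, here is a toy statistical check you could run on any random source. It’s a sketch, not a detection tool; the unsettling part is that a competently weakened generator would still sail through simple tests like this one.

```python
# A crude sanity check on a random source: count 1-bits and compare to the
# expected 50/50 split. Illustrative only - a deliberately weakened generator
# can easily be built to pass simple statistical tests like this one, which is
# exactly why an RNG backdoor would be so hard to detect.
import os

def monobit_bias(data: bytes) -> float:
    ones = sum(bin(b).count("1") for b in data)
    total_bits = len(data) * 8
    return ones / total_bits  # should hover very close to 0.5

if __name__ == "__main__":
    sample = os.urandom(1024 * 1024)  # 1 MB from the OS random source
    print(f"fraction of 1-bits: {monobit_bias(sample):.5f}")
```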

Rather than risk getting caught messing with everyone’s updates, my bet is that the NSA has compromised the microcode update signing keys, giving the NSA the ability to selectively target specific computers. (Your operating system ensures the security of updates by checking downloaded update packages against the signing key.) The NSA can then send out backdoors disguised as a Windows update for “security.” (Ironic but possible.)

That means you don’t need backdoors baked into the hardware, don’t need Intel’s buy-in, don’t have discoverable rootkits, and you can target specific systems without impacting the public at large.
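
To see why compromised signing keys would be such a prize, here is a minimal sketch of how signed-update verification works in general. It uses the third-party Python cryptography package, and the key size and padding are illustrative; Intel’s and Microsoft’s actual schemes are undocumented.

```python
# Sketch of why update signing works - and why a stolen signing key defeats it.
# Uses the third-party "cryptography" package (pip install cryptography);
# key size and padding here are illustrative, not any vendor's real scheme.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = vendor_key.public_key()  # what your machine uses to check updates

update_blob = b"pretend this is a microcode or OS update"
signature = vendor_key.sign(update_blob, padding.PKCS1v15(), hashes.SHA256())

def is_trusted(blob: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, blob, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(is_trusted(update_blob, signature))         # True  - legitimate update
print(is_trusted(b"tampered update", signature))  # False - modified blob fails

# The catch: anyone holding vendor_key can sign *any* blob and it verifies
# just as cleanly - the check proves who signed it, not what it does.
backdoored = b"malicious payload"
bad_sig = vendor_key.sign(backdoored, padding.PKCS1v15(), hashes.SHA256())
print(is_trusted(backdoored, bad_sig))            # True  - indistinguishable
```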

Two Can Play the Game
A few months ago these kinds of discussions would have been theory at best, if not paranoia. The Prism disclosures prove otherwise – the National Security Agency has decided it needs the ability to capture all communications in all forms. Getting inside a target computer to weaken its encryption or gain access to the plaintext of encrypted communication seems likely. Given the technical sophistication of the other parts of their surveillance net, the surprise would be if they haven’t implemented a microcode backdoor.

The downside is that 1) backdoors can be hijacked by others with even worse intent. So if the NSA has a microcode backdoor – who else is using it? And 2) what other pieces of our infrastructure (routers, smartphones, military computers, satellites, etc.) use processors with uploadable microcode?

——

And that may be why the Russian president is now using a typewriter rather than a personal computer.

Update: I asked Intel:

  • Has Intel received any National Security Letters?
  • If you had received a National Security Letter, would you be able to tell us that you did?
  • Has Intel ever been contacted by anyone in the U.S. government about Microcode Updates or the signing keys?
  • Does anyone outside of Intel have knowledge of the Microcode Update format or the signing keys?
  • Does anyone outside of Intel have access to the Microcode Updates or the signing key?

Intel’s response from their Director of Corporate and Legal Affairs (italics mine):

“First, I have no idea whether we’ve ever received a National Security Letter and don’t intend on spending any time trying to find out.  It’s not something we would talk about in any case, regardless of the subject of your blog.

Second, the questions related microcode and the speculative portion of your blog related to our encryption of microcode and the key all seem to focus around one question:  Do we have backdoors available as a result of our microcode download encryption scheme?
The answer is NO.  Only Intel has that knowledge.”

Update 2:  A much better description of the problem was actually presented a year ago at Defcon



The Endless Frontier: U.S. Science and National Industrial Policy (part 1)

The U.S. has spent the last 70 years making massive investments in basic and applied research. Government funding of research started in World War II, driven by the military’s need for weapons systems to defeat Germany and Japan. Post-WWII, the responsibility for investing in research split between agencies focused on weapons development and space exploration (completely customer-driven) and other agencies chartered to fund basic and applied research in science and medicine (driven by peer review).

The irony is that while the U.S. government has had a robust national science and technology policy, it lacks a national industrial policy, leaving that to private capital. This approach was successful when U.S. industry was aligned with manufacturing in the U.S., but became much less so in the last decade as the bottom line drove industries offshore.

In lieu of the U.S. government’s role in setting investment policy, venture capital has set the direction for what new industries attract capital.

This series of blog posts is my attempt to understand how science and technology policy in the U.S. began, where the money goes and how it has affected innovation and entrepreneurship. In future posts I’ll offer some observations on how we might rethink U.S. Science and National Industrial Policy as we face the realities of China and global competition.

Office of Scientific Research and Development – Scientists Against Time
As World War II approached, Vannevar Bush, the ex-dean of engineering at MIT, single-handedly reengineered the U.S. government’s approach to science and warfare. Bush predicted that World War II would be the first war won or lost on the basis of advanced technology. In a major break from the past, Bush believed that scientists from academia could develop weapons faster and better if they were kept out of the military and instead worked in civilian-run weapons labs. There they would be tasked to develop military weapons systems and solve military problems to defeat Germany and Japan. (The weapons were then manufactured in volume by U.S. corporations.)

In 1940 Bush proposed this idea to President Roosevelt, who agreed and appointed Bush as head of what was first called the National Defense Research Committee and then, in 1941, became the Office of Scientific Research and Development (OSRD).

OSRD divided the wartime work into 19 “divisions,” 5 “committees” and 2 “panels,” each solving a unique part of the military war effort. These efforts spanned an enormous range of tasks: the development of advanced electronics and radar, rockets, sonar, new weapons like the proximity fuze, napalm and the bazooka, and new drugs such as penicillin and cures for malaria.


The civilian scientists who headed OSRD’s divisions, committees and panels were given wide autonomy to determine how to accomplish their tasks and organize their labs. Nearly 10,000 scientists and engineers received draft deferments to work in these labs.

One OSRD project – the Manhattan Project, which led to the development of the atomic bomb – was so secret and important that it was spun off as a separate program. The University of California managed research and development of the bomb design lab at Los Alamos, while the U.S. Army managed the Los Alamos facilities and the overall administration of the project. The materials to make the bombs – plutonium and uranium-235 – were made by civilian contractors at Hanford, Washington and Oak Ridge, Tennessee.

OSRD was essentially a wartime U.S. Department of Research and Development. Its director, Vannevar Bush, became in all but name the first presidential science advisor. Think of the OSRD as a combination of all of today’s U.S. national research organizations – the National Science Foundation (NSF), the National Institutes of Health (NIH), the Centers for Disease Control (CDC), the Department of Energy (DOE) and a good part of the Department of Defense (DOD) research organizations – all rolled into one uber wartime research organization.

OSRD’s impact on the war effort and the policy for technology was evident by the advanced weapons its labs developed, but its unintended consequence was the impact on American research universities and the U.S. economy that’s still being felt today.

National Funding of University Research
Universities were started with a mission to preserve and disseminate knowledge. By the late 19th century, U.S. universities had added scientific and engineering research to that mission. However, prior to World War II, corporations, not universities, did most of the research and development in the United States. Private companies spent 68% of U.S. R&D dollars, the U.S. government spent 20%, and universities and colleges accounted for just 9%, most of it coming via endowments or foundations.

Before World War II, the U.S. government provided almost no funding for research inside universities. But with the war, almost overnight, government funding for U.S. universities skyrocketed. From 1941 to 1945, the OSRD spent $450 million (equivalent to $5.5 billion today) on university research. MIT received $117 million ($1.4 billion in today’s dollars), Caltech $83 million (~$1 billion), Harvard and Columbia ~$30 million ($370 million). Stanford was near the bottom of the list, receiving $500,000 (~$6 million). While this was an enormous sum of money for universities, it’s worth putting in perspective that ~$2 billion was spent on the Manhattan Project (equivalent to ~$25 billion today).
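
The “today’s dollars” figures above are simple inflation adjustments. Here is a quick sketch of the arithmetic, using a rough ~12x multiplier implied by the post’s own numbers rather than an official CPI series:

```python
# Rough inflation adjustment behind the "today's dollars" figures above.
# The ~12x multiplier is an approximation implied by the post's own numbers
# (e.g. $450M -> ~$5.5B), not an official CPI series.
WWII_TO_TODAY = 12.2

osrd_grants = {            # OSRD wartime research spending, in then-year dollars
    "All universities (total)": 450_000_000,
    "MIT": 117_000_000,
    "Caltech": 83_000_000,
    "Harvard": 30_000_000,
    "Stanford": 500_000,
}

for recipient, amount in osrd_grants.items():
    today = amount * WWII_TO_TODAY
    print(f"{recipient}: ${amount:,.0f} then = roughly ${today:,.0f} today")
```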

World War II and OSRD funding permanently changed American research universities. By the time the war was over, almost 75% of government research and development dollars would be spent inside universities. This tidal wave of research funds provided by the war would:

  • Establish a permanent role for U.S. government funding of university research, both basic and applied
  • Establish the U.S. government – not industry, foundations or internal funds – as the primary source of University research dollars
  • Establish a role for government funding for military weapons research inside of U.S. universities (See the blog posts on the Secret History of Silicon Valley here, and for a story about one of the University weapons labs here.)
  • Make U.S. universities a magnet for researchers from around the world
  • Give the U.S. the undisputed lead in a technology and innovation driven economy – until the rise of China.

The U.S. Nationalizes Research
As the war drew to a close, university scientists wanted the money to continue to flow but also wanted to end the government’s control over the content of research. That was the aim of Vannevar Bush’s 1945 report, Science: the Endless Frontier. Bush’s wartime experience convinced him that the U.S. should have a policy for science. His proposal was to create a single federal agency – the National Research Foundation – responsible for funding basic research in all areas, from medicine to weapons systems. He proposed that civilian scientists would run this agency in an equal partnership with government. The agency would have no laboratories of its own, but would instead contract research to university scientists who would be responsible for all basic and applied science research.

But it was not to be. After five years of post-war political infighting (1945-1950), the U.S. split up the functions of the OSRD. The military hated that civilians were in charge of weapons development. In 1946, responsibility for nuclear weapons went to the new Atomic Energy Commission (AEC). In 1947, responsibility for basic weapons systems research went to the Department of Defense (DOD). Medical researchers, who already had a pre-war National Institutes of Health, chafed under an OSRD that lumped their medical research with radar and electronics, and lobbied to be once again associated with the NIH. In 1947 the responsibility for all U.S. biomedical and health research went back to the National Institutes of Health. Each of these independent research organizations would support a mix of basic and applied research as well as product development.

The End of OSRD

Finally, in 1950, what was left of Vannevar Bush’s original vision – government support of basic science research in U.S. universities – became the charter of the National Science Foundation (NSF). (Basic research is science performed to find general physical and natural laws and to push back the frontiers of fundamental understanding, done without specific applications, processes or products in mind. Applied research is systematic study to gain knowledge or understanding with specific products in mind.)

Despite the failure of Bush’s vision of a unified national research organization, government funds for university research would accelerate during the Cold War.

Coming in Part 2 – Cold War science and Cold War universities.

Lessons Learned

  • Large scale federal funding for U.S. science research started with the Office of Scientific Research and Development (OSRD) in 1940
  • Large scale federal funding for American research universities began with OSRD in 1940
  • In exchange for federal science funding, universities became partners in weapons systems research and development


Startup Communities – Building Regional Clusters

How to build regional entrepreneurial communities has just gotten its first “here’s how to do it” book. Brad Feld’s new book Startup Communities joins the two other “must reads” (Regional Advantage and Startup Nation) and one “must view” (The Secret History of Silicon Valley) for anyone trying to understand the components of a regional cluster.

There’s probably no one more qualified to write this book than Brad Feld (startup founder, co-founder of two VC firms – Mobius and Foundry – and founder of TechStars).

Leaders and Feeders
Feld’s thesis is that, contrary to the common wisdom, it is entrepreneurs who lead a startup community while everyone else feeds the community.

Feld describes the characteristics of those who want to be regional entrepreneurial leaders: they need to be committed to their region for the long term (20+ years), and the community and its leaders must be inclusive, play a non-zero-sum game, be mentorship-driven and be comfortable experimenting and failing fast.

Feeders include the government, universities, investors, mentors, service providers and large companies. He points out that some of these (government, universities and investors) think of themselves as the leaders, and Feld’s thesis is that we’ve gotten it wrong for decades.

This is a huge insight, a big idea and a fresh way to view and build a regional ecosystem in the 21st century. It may even be right.

Activities and Events
One of the most surprising observations (to me) was that a regional community must have continual activities and events to engage all participants. Using Boulder, Colorado (Feld’s home town) as an example, this small entrepreneurial community runs office hours, the Boulder Denver Tech Meetup, Boulder Open Coffee Club, Ignite Boulder, Boulder Beta, Boulder Startup Digest, Startup Weekend events, the CU New Venture Challenge, Boulder Startup Week, the Young Entrepreneurs Organization and the Entrepreneurs Foundation of Colorado. For a city of 100,000 (in a metro area of just 300,000 people) the list of activities and events in Boulder takes your breath away. They are not run by the government or any single organization. These are all grassroots efforts by entrepreneurial leaders. These events are a good proxy for the health and depth of a startup community.

Incubators and Accelerators
One of the best definitions in the book is Feld’s articulation of the difference between an incubator and an accelerator. An incubator provides year-round physical space, infrastructure and advice in exchange for a fee (often in equity). Incubators are typically non-profit and attached to a university (or in some locations a local government). For some incubators, entrepreneurs can stay as long as they want, and there is no guaranteed funding. In contrast, an accelerator runs cohorts through a program of a set length, with funding typically provided at the end.

Feld describes TechStars (founded in 2006 with David Cohen) as an example of how to build a regional accelerator. In contrast to other accelerators, TechStars is mentor-driven, with a profound belief that entrepreneurs learn best from other entrepreneurs. It’s a 90-day program with a clear beginning and end for each cohort. TechStars’ selection criteria focus first on picking the right team, then the market. They invest $118,000 ($18K seed funding + $100K convertible note) in 10 teams per region.

Role of Universities
Stanford and MIT are held up to the entrepreneurial community as models for “outward-facing” research universities. They act as community catalysts, as magnets drawing great entrepreneurial talent to the region, and as teachers and then a pipeline of talent back into the region. In addition, their research offers a continual stream of new technologies to be commercialized.

Feld’s observation is that these schools are exceptions that are hard to duplicate. In most universities entrepreneurial engagement is not rewarded, there’s a lack of resources for entrepreneurial programs and cross-campus collaboration is not in their DNA.

Rather than thinking of the local university as the leader, Feld posits that a more effective approach is to use the local college or university as a resource and a feeder of entrepreneurial students to the local entrepreneurial community. He uses the University of Colorado Boulder as an example of a regional university being as inclusive as possible with courses, programs and activities.

Finally, he suggests engaging alumni for something other than fundraising – bringing them back to campus, having them mentor top students and celebrating their successes.

Role of Government
Feld is not a big fan of top-down, government-driven clusters. He describes the disconnect between entrepreneurs and government. Entrepreneurs are painfully self-aware, but governments are chronically not self-aware. This leaves government leaders out of touch with how the dynamics of startups really work. Governments have a top-down command and control hierarchy, while entrepreneurs work in a bottom-up networked world. Governments tend to focus on macro metrics of economic development policy, while entrepreneurs talk about lean startups, people and product. Entrepreneurs talk about immediate action, while government conversations about policy lack urgency. Startups aim for immediate impact, while governments want control. Startup communities are networked and don’t lend themselves to a command and control system.

Community Culture
Feld believes that community culture, how individuals interact and behave toward each other, is a key part of defining an entrepreneurial community. His list of cultural attributes is an integral part of Silicon Valley. Give before you get (in the valley we call this the “pay it forward” culture). Everyone is a mentor, so share your knowledge and give back. Embrace weirdness describes a community culture that accepts differences. (Starting post-World War II, the San Francisco Bay Area became a magnet for those wanting to embrace alternate lifestyles. For personal lifestyles people headed to San Francisco. For alternate business lifestyles they went 35 miles south to Silicon Valley.)

I was surprised to note that the biggest cultural meme of Silicon Valley didn’t make his Community Culture chapter – failure equals experience.

Broadening the Startup Community
Feld closes by highlighting some of the issues faced by a startup community in Boulder. The one he calls Parallel Universes notes that there may be industry-specific (biotech, clean tech, etc.) startup communities sitting side-by-side and not interacting with each other.

He then busts the myths clusters tell themselves: “let’s be like Silicon Valley” and “there’s not enough capital here.”

Quibbles
There’s data that seems to indicate a few of Feld’s claims about the limited role of venture capital, universities and governments might be overly broad (though that doesn’t diminish his observation that they’re feeders, not leaders). In addition, while Silicon Valley was a series of happy accidents, other national clusters have extracted its lessons and successfully engineered on top of those heuristics. And while I might have misread Feld’s premise about local venture capital, it seems to be: “if there isn’t robust venture capital in your region it’s because there isn’t a vibrant entrepreneurial community with great startups. Since venture capital exists to service startups, when great startups are built investors will show up.” Wow.

Finally, local government top-down initiatives are not the only way governments can incentivize entrepreneurial efforts. Some, like the National Science Foundation Innovation Corps, have delivered a big bang for little bucks.

Summary
Entrepreneurship is rising in almost every major city and region around the world. I host at least one region a week at the ranch, and each of these regions is looking for a roadmap. Startup Communities is it. It’s a strategic, groundbreaking book and a major addition to what was missing in the discussion of how to build a regional cluster. I’m going to be quoting from it liberally, stealing from it often, and handing it out to my visitors.

Buy it.

Lessons Learned

  • Entrepreneurs lead a startup community while everyone else feeds the community
  • Feeders include the government, universities, investors, mentors, service providers and large companies
  • Continual activities and events are essential to engage all participants
  • Top-down government-driven clusters are an oxymoron
  • Building a regional entrepreneurial culture is critical


The Pay-It-Forward Culture

Foreign visitors to Silicon Valley continually mention how willing we are to help, network and connect strangers. We take it so for granted we never even bother to talk about it. It’s the “Pay-It-Forward” culture.

——-

We’re all in this together – The Chips are Down
In 1962, Walker’s Wagon Wheel Bar/Restaurant in Mountain View became the lunch hangout for employees at Fairchild Semiconductor.

When the first spinouts began to leave Fairchild, they discovered that fabricating semiconductors reliably was a black art. One week you’d have the recipe and turn out chips; the next week something would go wrong and your fab couldn’t make anything that worked. Engineers in the very small world of silicon and semiconductors would meet at the Wagon Wheel and swap technical problems and solutions with co-workers and competitors.

We’re all in this together – A Computer in every Home
In 1975 a local group of hobbyists with the then-crazy idea of a computer in every home formed the Homebrew Computer Club, meeting first at the Peninsula School in Menlo Park and later at the Stanford AI Lab. The goal of the club was: “Give to help others.” Each meeting would begin with people sharing information, getting advice and discussing the latest innovation (one of which was the first computer from Apple). The club became the center of the emerging personal computer industry.

We’re all in this together – Helping Our Own
Until the 1980s, Chinese and Indian engineers ran into a glass ceiling in large technology companies, held back by the belief that “they make great engineers but can’t be the CEO.” Looking for a chance to run their own show, many of them left and founded startups. They also set up ethnic-centric networks like TiE (The Indus Entrepreneurs) and the Chinese Software Professionals Association, where they shared information about how the valley worked as well as job and investment opportunities. Over the next two decades, other groups — Russian, Israeli, etc. — followed with their own networks. (AnnaLee Saxenian has written extensively about this.)

We’re all in this together – Mentoring The Next Generation
While the idea of groups (chip makers, computer hobbyists, ethnic networks) helping each other grew, something else happened. The first generation of executives who grew up getting help from others began to offer their advice to younger entrepreneurs. These experienced valley CEOs would take time out of their hectic schedules to have coffee or dinner with young entrepreneurs, asking for nothing in return.

They were the beginning of the Pay-It-Forward culture, the unspoken Valley culture that believes “I was helped when I started out and now it’s my turn to help others.”

By the early 1970s, even the CEOs of the largest valley companies would take phone calls and meetings with interesting and passionate entrepreneurs. In 1967, when he was 12 years old, Steve Jobs called up Bill Hewlett, the co-founder of HP.

In 1975, Jobs, then a young, unknown wannabe entrepreneur, called the founder/CEO of Intel, Bob Noyce, and asked for advice. Noyce liked the kid, and for the next few years Noyce met with him and coached him as he founded his first company and went through the highs and lows of a startup that caught fire.

Steve Jobs and Robert Noyce

“Bob Noyce took me under his wing, I was young, in my twenties. He was in his early fifties. He tried to give me the lay of the land, give me a perspective that I could only partially understand,” Jobs said. “You can’t really understand what is going on now unless you understand what came before.”

What Are You Waiting For?
Last week in Helsinki, Finland, at a dinner with a roomful of large-company CEOs, one of them asked, ”What can we do to help build an ecosystem that will foster entrepreneurship?” My guess is they were expecting me to talk about investing in startups or corporate partnerships. Instead, I told the Noyce/Jobs story and noted that, as a group, they had a body of knowledge that entrepreneurs and business angels would pay anything to learn. The best investment they could make to help a startup culture in Finland would be to share what they know with the next generation. Even more, this culture could be created by a handful of CEOs and board members who led by example. I suggested they ought to be the ones to do it.

We’ll see if they do.

——

Over the last half century in Silicon Valley, the short life cycle of startups reinforced the idea that the long-term relationships that lasted were with a network of people much larger than those in your current company. Today, in spite of the fact that the valley is crawling with IP lawyers, the tradition of helping and sharing continues. The restaurants and locations may have changed, moving from Rickey’s Garden Cafe, Chez Yvonne, Lion and Compass and Hsi-Nan to Bucks, Coupa Café and Café Borrone, but the notion of competitors getting together and helping each other and experienced business execs offering contacts and advice has continued for the last 50 years.

It’s the “Pay-It-Forward” culture.

Lessons Learned

  • Entrepreneurs in successful clusters build support networks outside of existing companies
  • These networks can be around any area of interest (technology, ethnic groups, etc.)
  • These networks were mutually beneficial – you learned and you contributed to help others
  • Over time experienced executives “pay-back” the help they got by mentoring others
  • The Pay-It-Forward culture makes the ecosystem smarter


The Internet Might Kill Us All

My friend Ben Horowitz and I debated the tech bubble in The Economist. An abridged version of this post was the “closing” statement to Ben’s rebuttal comments. Part 1 is here and Part 2 here.  The full version is below.

—————————————————
It’s been fun debating the question, “Are we in a tech bubble?” with my colleague Ben Horowitz. Ben and his partner Marc Andreessen (the founder of Netscape and author of the first commercial web browser on the Internet) are the definition of Smart Money. Their firm, Andreessen Horowitz, was prescient enough to invest in social networks, consumer and mobile applications and the cloud long before others. They understood the ubiquity, pervasiveness and ultimate profitability of these startups and doubled down on their investments.

My closing arguments are below. I’ve followed them with a few observations about the Internet that may help frame the scope of the debate.

Are we in the beginnings of a tech bubble – yes.
Valuations for both private and public tech companies exceed any rational assessment of their current worth. In 5 to 10 years most of them will be worth a fraction of their IPO price. A few will be worth much, much more.

Is this tech bubble as broad as the 1995-2000 dot.com bubble – no.
While labeled the “dot.com” bubble, valuations went crazy across a wide range of technology sectors including telecommunications, enterprise software and biotech, not just the Internet.

Are tech bubbles necessarily bad – no.
A bubble is simply the redistribution of wealth from the Marks to the Smart Money and the Promoters. I hypothesize that unlike bubbles in other sectors – tulips, Florida land prices, housing, financial instruments – tech bubbles create lasting value. They finance companies that invest in new technologies, new ideas and new products. And it appears that, at least in Silicon Valley, a large percentage of the money made in the last tech bubble is recirculated back into investments in the next generation of tech startups.

While most of the social networks, cloud computing, web and mobile app companies we see today will fail, a few will literally remake our lives.

Here are two views of how.

The Internet May Liberate Us
In the last year, we’ve seen Social Networks enable new forms of peaceful revolution. To date, the results of Twitter and Facebook are more visible on the Arab Street than Wall Street.

Two of the most effective weapons in the Cold War were the mimeograph machine and the VCR. The ability to copy and disseminate banned ideas undermined repressive regimes from Poland to Iran to the Soviet Union.

In the 21st century, authoritarian governments still fear their own people talking to each other and asking questions. When governments shut down Google, Twitter, Facebook, et al, they are building the 21st century equivalent of the Berlin Wall. They are admitting to the world that the forces of oppression can’t stand up to 140 characters of the truth.

When these governments build “homegrown” versions of these apps, the Orwellian prophecy of the Ministry of Truth lives in each distorted or missing search result. Absent war, these regimes eventually collapse under their own weight. We can help accelerate their demise by building tools that allow people in these denied areas access to the truth.

Yet the same set of tools that will free hundreds of millions of people may end their lives in minutes.

The Internet May Kill Us
The next war will more than likely occur via the Internet. It may be over in minutes. We may be watching the first skirmishes.

In the 20th century, the economies of first-world countries became dependent on a reliable supply of food, water, electricity, transportation and telephone. Part of waging war was destroying that physical infrastructure. (The Combined Bomber Offensive of Germany and occupied Europe during WWII was designed to do just that.)

In the last few years, most first-world countries have become dependent on the Internet as one of those critical parts of our infrastructure. We use the net in four different ways: 1) to control the physical infrastructure we built in the 20th century (food, water, electricity, transportation and communications); 2) as the network for our military, interconnecting all our warfighting assets, from mundane logistics to command and control systems, weapons systems and targeting systems; 3) as commercial assets that exist or can operate only if the net exists, including communication tools (email, Facebook, Twitter, etc.) and corporate infrastructure (cloud storage and apps); 4) for our banking and financial systems.

Every day hackers demonstrate how weak the security of our corporate and government resources is. Stealing millions of credit cards occurs on a regular basis. Yet all of these are simply crimes, not acts of war.

The ultimate in asymmetric warfare
In the 20th century, the United States was continually unprepared for an adversary using asymmetric warfare — the Japanese attack on Pearl Harbor, Soviet anthrax warheads on their ICBMs during the Cold War, guerrilla warfare in Vietnam, and the 9/11 attacks.

While hacker attacks against banks and commercial institutions make good press, the most troubling portents of the next war were the Stuxnet attack on the Iranian centrifuge facilities, the compromise of the RSA security system and the penetration of American defense contractors. These weren’t LulzSec or Anonymous hackers; these were attacks by government military projects with thousands of programmers coordinating their efforts. All had a single goal in mind: to prepare to use the Internet to destroy a country without physically killing its people.

Our financial systems (banks, stock market, credit cards, mortgages, etc.) exist as bits. Your net worth and mine exist because there are financial records that tell us how many “dollars” (or Euros, Yen, etc.) we own. We don’t physically have all that money. It’s simply the sum of the bits in a variety of institutions.

An attack on the United States could begin with the destruction of all those financial records. (A financial institution that can’t stop criminal hackers would have no chance against a military attack to destroy the customer data in their systems. Because security is expensive, hard, and at times not user friendly, the financial services companies have fought any attempt to mandate hardened systems.) Logic bombs planted on those systems will delete all the backups once they’re brought on-line. All of it gone.  Forever.

At the same time, all cloud-based assets, all companies’ applications and customer data, will be attacked and deleted. All of it gone. Forever.

Major power-generating turbines will be attacked the same way Stuxnet worked – over- and under-speeding the turbines and rapidly cycling the switching systems until they burn out. A major portion of our electrical generation capacity will be off-line until replacements can be built. (They are currently built in China.)

Our transportation infrastructure – air traffic control systems, airline reservations, package delivery companies – will be hacked, and our GPS infrastructure will be taken down (hacked, jammed or physically attacked).

While some of our own military systems are hardened, attackers will shut down the soft parts of the military logistics and communications systems. Since our defense contractors have been the targets of some of the latest hacks, our newest weapons systems may not work, or worse if used, may have been reprogrammed to destroy our own assets.

An attacker may try to mask its identity by making the attack appear to come from a different source. With our nation in an unprecedented economic collapse, our ability to retaliate militarily against a nuclear-armed opponent claiming innocence and threatening a response while we face them with unreliable weapons systems could make for a bad day. Our attacker might even offer economic assistance as part of the surrender terms.

These scenarios make the question, “Are we in a tech bubble?” seem a bit ironic.

It Doesn’t Have to Happen
During the Cold War the United States and the Soviet Union faced off with an arsenal of strategic and tactical nuclear weapons large enough to directly kill hundreds of millions of people and plunge the planet into a “nuclear winter” that could have killed billions more. But we didn’t do it. Instead, today the McDonald’s in plazas labeled “Revolutionary Square” is the victory parade for democracy and capitalism.

It may be that we will survive the threat of a Net War like we did the Cold War and that the Internet turns out to be the birth of a new spring for us all.

Panic at the Pivot – Aligning Incentives By Burning the Boats

It’s a paradox, but early sales success in a startup can kill its chances of becoming a large successful company. The cause is often sales and marketing execs who’ve become too comfortable with an initial sales model and panic at the first sign of a Pivot. As a result they block new iterations of the business model that might take the company to the next level.

Fairchild
As I was reading a history of the startup years of Fairchild Semiconductor, I realized that a problem I thought was new – sales as an obstacle to Pivots – had occurred 50 years ago at the dawn of what would become Silicon Valley.

Fairchild, the first successful semiconductor company in the valley, was founded on two technical innovations: manufacturing transistors out of silicon instead of the then-conventional germanium, and using a diffusion manufacturing process that enabled the production of silicon mesa transistors in batches on an assembly line. (While this might sound like Greek to you, it was a revolution.)

Early on, the young company made a dramatic technical pivot when it discovered a way to build silicon planar transistors that dramatically improved reliability. (This was an even bigger revolution.) This increased reliability qualified Fairchild’s transistors for military weapons systems (airborne electronics, missile guidance systems, etc.). With orders from military subcontractors arming the Cold War, Fairchild’s sales skyrocketed from $500K in 1958 to $7M in 1959 to $21M in 1960.

By the end of 1960, Fairchild was at the top of its game. In less than three years from the day it started, the company had pivoted its technology process, sales had done a masterful job of Customer Discovery and found a sweet spot in the market, and its fabrication plants were busy turning out as many transistors and diodes as they could make.

What could go wrong?

It was then that engineering Pivoted again. And this time sales revolted.

The Revolution Will Not Be Televised
When Fairchild engineers realized that transistors made with its planar process could now be connected together on a single piece of silicon, the Integrated Circuit was born. Engineering thought this could dramatically change the way electronic systems were built, but the head of sales tried to kill the Integrated Circuit program, loudly and vociferously. Engineering was confused: why didn’t the Fairchild salesforce want a revolutionary new product line?

Over My Dead Body
From the point of view of the sales organization, this new family of integrated circuits was a major distraction. The Fairchild sales team was on a roll executing a known business model – selling planar diodes and transistors into an existing market. In the transistor market, the problem was known, the customer was known and the basis of competition was known (technical features, price and delivery schedule).

Integrated circuits were different. Unlike transistors, no one in 1960 was clamoring for the new technology. Integrated circuits were a new market. It wasn’t clear exactly what problem the product would solve, or who the customer was. In fact, the most likely customers, computer designers, were openly hostile, as they saw integrated circuits doing what they were supposed to be doing – designing circuits. So selling integrated circuits meant a search for a business model.

This meant that a high-testosterone sales team that was busy “executing” as order takers and deal makers had to put on a different hat and become educators and consultative engineers. No way.

You Get What You Incent
What the engineers also didn’t know was that Fairchild’s head of sales had cut a great deal on his compensation package: he was paid 1% of gross sales. While this made sense in the first few years when Fairchild was a startup, it now had unintended consequences. His salesmen were also compensated on a commission basis. Why would they want a product they had to force customers to take when they had existing products that were making them rich?
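
A back-of-the-envelope calculation using the sales figures quoted earlier shows what that 1% deal was worth, and why a consultative, no-revenue-yet product looked so unattractive (the commission math is illustrative, not from Fairchild’s books):

```python
# Back-of-the-envelope: what a 1% cut of gross sales meant as Fairchild grew.
# Sales figures are the ones quoted above; the commission math is illustrative.
sales_by_year = {1958: 500_000, 1959: 7_000_000, 1960: 21_000_000}
COMMISSION_RATE = 0.01

for year, gross in sales_by_year.items():
    cut = gross * COMMISSION_RATE
    print(f"{year}: gross sales ${gross:,} -> head of sales' cut ${cut:,.0f}")

# Every hour spent educating customers about integrated circuits - a product
# with no existing demand - was an hour not spent booking transistor orders
# that paid immediately under this plan.
```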

The VP of sales’ incentives led him to stifle any innovation that got in the way of selling as much of the current technology as he could – even if it meant killing the future of the company. Luckily for Fairchild and the future of the semiconductor and computer business, he quit when his compensation plan was changed.

The Land of the Living Dead
I see this same pattern in early-stage startups. Early sales look fine, but often plateau. Engineering comes into a staff meeting with several innovative ideas, and the head of sales and/or marketing shoots them down with the cry of “It will kill our current sales.”

The irony is that “killing our current sales” is often what you need to do. Most startups don’t fail outright; they end up in “the land of the living dead,” where sales are consistently just OK but never break out into a profitable and scalable company. This is usually due to a failure of the CEO and board to force the entire organization to Pivot. The goal of a scalable startup isn’t optimizing the comp plan for the sales team but optimizing the long-term outcome of the company. At times the two will conflict. And startup CEOs need a way to move everyone out of their comfort zone to the bigger prize.

Burn The Boats
In 1519 Hernando Cortes landed on the Yucatan peninsula to conquer the Aztec Empire and bring its treasure back to Spain. His small army arrived in 11 boats. As they landed, Cortes solved the problem of getting his team focused on what was ahead of them – he ordered them to burn the boats they came in. Now the only way home was to succeed in their new venture or die.

Pivots that involve radical changes to the business model may at times require burning the boats at the shore.

——

Every chip company in Silicon Valley is descended from Fairchild.

Lessons Learned

  • Sales organizations may get too comfortable too early.
  • Sales execs execute to their compensation plans.
  • Pivots are not subject to a vote in the exec staff meeting.
  • CEOs and their boards make the Pivot decisions.
  • To force a Pivot burn the boats at the shore.


The Secret History of Silicon Valley Part 15: Agena – The Secret Space Truck, Ferrets and Stanford

This post is the latest in the “Secret History Series.”  They’ll make much more sense if you read some of the earlier ones for context. See the Secret History video and slides as well as the bibliography for sources and supplemental reading.

————

By the early 1960s, Lockheed Missiles Division in Sunnyvale was quickly becoming the largest employer in what would later be called Silicon Valley. Along with its publicly acknowledged contract to build the Polaris Submarine Launched Ballistic Missile (SLBM), Lockheed was also secretly building the first photo reconnaissance satellites (codenamed CORONA) for the CIA in a factory in East Palo Alto.

It was only a matter of time before Stanford’s Applied Electronics Lab research on Electronic and Signals Intelligence and Lockheed’s missiles and spy satellites intersected. Here’s how.

Lockheed Agena

Thor/AgenaD w/Corona

In addition to the CORONA CIA reconnaissance satellites, Lockheed was building another assembly line, this one for the Agena – a space truck. The Agena sat on top of a booster rocket (first the Thor, then the Atlas and finally the Titan) and had its own rocket engine that would help haul the secret satellites into space. The engine (made by Bell Aerosystems) used storable hypergolic propellants so it could be restarted in space to change the satellite’s orbit. Unlike other second-stage rockets, once in orbit the Agena stayed attached to the CORONA reconnaissance satellite, stabilizing it, pointing it at the right location, and orienting it in the right direction to send its recovery capsule on its way back to earth.

The Agena would be the companion to almost all U.S. intelligence satellites for the next decade. Three different models were built, and for over a decade nearly four hundred of them (at the rate of three a month) were produced on an assembly line in Sunnyvale and tested at Lockheed’s missile test base in the Santa Cruz mountains.

Agena Ferrets – Program 11
As Lockheed engineers gained experience with the Agena and the CORONA photo reconnaissance satellite, they realized that they had room on a rack in the back of the Agena to carry another payload (as well as the extra thrust to lift it into space). By the summer of 1962, Lockheed proposed a smaller satellite that could be deployed from the rear of the Agena. This subsatellite was called Program 11, or P-11 for short. The P-11 subsatellite weighed up to 350 lbs, had its own solid rockets to boost it into different orbits and solar arrays for power, and was stabilized either by deploying long booms or by spinning 60-80 times a second.

Agena Internals

And they had a customer who couldn’t wait to use the space. While the CORONA reconnaissance satellites were designed to take photographs from space, putting a radar receiver on a satellite would enable it to receive, record and locate Soviet radars deep inside the Soviet Union. For the first time, the National Security Agency (working through the National Reconnaissance Office) and the U.S. Air Force could locate radars that threatened our manned bombers as well as those that might be part of an anti-ballistic missile system. Most people thought the idea was crazy. How could you pick up a signal so faint while the satellite was moving so rapidly? Could you sort out one radar signal from all the other noise? There was one way to find out: build the instruments and have them piggyback on the Agena/CORONA photo reconnaissance satellites.

But who could quickly build these satellites to test this idea?

Stanford and Ferrets
Just across the freeway from Lockheed’s secret CORONA assembly plant in Palo Alto, James de Broekert was at the Stanford Applied Electronics Laboratory, the lab founded by Fred Terman out of his WWII work in Electronic Warfare.

“This was an exciting opportunity for us,” de Broekert remembered. “Instead of flying at 10,000 or 30,000 feet, we could be up at 100 to 300 miles and have a larger field of view and cover much greater geographical area more rapidly. The challenges were establishing geolocation and intercepting the desired signals from such a great distance. Another challenge was ensuring that the design was adapted to handle the large number of signals that would be intercepted by the satellite. We created a model to determine the probability of intercept on the desired and the interference environment from the other radar signals that might be in the field of view,” de Broekert explained.

“My function was to develop the system concept and to establish the system parameters. I was the team leader, but the payloads were usually built as a one-man project with one technician and perhaps a second support engineer. Everything we built at Stanford was essentially built with stockroom parts. We built the flight-ready items in the laboratory, and then put them through the shake and shock fall test and temperature cycling…”

Agena and Ferret Subsatellite credit: USAF

Like the cover story for CORONA (which called them Discoverer scientific research satellites), the first three P-11 satellites were described as “science” missions, with results published in the Journal of Geophysical Research.

Just fifteen years after Fred Terman had built Electronic Intelligence and Electronic Warfare systems for bombers over Nazi Germany, Electronic Intelligence satellites were being launched in space to spy on the Soviet Union.

Close to 50 Ferret subsatellites were launched as secondary payloads aboard Agena photo reconnaissance satellites.

Ferret Entrepreneur
After student riots at Stanford in April 1969 shut down the Applied Electronics Laboratory, James de Broekert left Stanford. He went on to co-found three Silicon Valley military intelligence companies: Argo Systems, Signal Science and Advent Systems.

In 2000 the National Reconnaissance Office recognized James de Broekert as a “pioneer” for his role in the “establishment of the discipline of national space reconnaissance.”

