Yesterday I asked a prominent VC a question:
"Why is it that, despite the fact that so many successful startup ideas come from academic research, on the investment side there doesn't seem to be anyone vetting companies on the basis of whether or not what they're doing is consistent with the relevant research and best practices from academia?"
His response was that, unlike with startups in other sectors (e.g. biotech, cleantech, etc.), most tech startups don't come out of academia, but rather are created to fill an unmet need in the marketplace. And that neither he nor many of his colleagues spent much time talking with academics for this reason.
This seems to be the standard thinking across the industry right now. But despite having nothing but respect for this investor, I think the party line here is unequivocally wrong.
Let's start with the notion that most tech startups don't come out of academia. While this may be true if you consider only the one-sentence pitch, once you look at the actual design and implementation choices these startups are making, there is typically quite a lot to work with.
For example, I recently looked at a startup that matches mentors with mentees. Though one might not be aware of it, there is actually a wealth of research into mentoring best practices:
- What factors should be used when matching mentors with mentees?
- How should the relationship between the mentor and mentee be structured?
- What kind of training, if any, should be given to the participants?
That's not to say that a startup doing something outside the research, or even contraindicated by the research, is necessarily suspect. But it does raise some questions: Does the startup have a good reason for what they're doing? Are they aware of the relevant research? Is there something they know that we don't?
If the entrepreneurs have good answers to these questions then it's all the more reason to take them seriously. But if they don't then this should raise a few red flags. And it's not only niche startups in wonky areas where this is an issue.
For example, I rarely post to Facebook anymore, but people who follow me can still get a good idea of what I'm up to. Why? Because Facebook leverages the idea of behavioral residue to figure out what I'm doing (and let my friends know) without me having to explicitly post updates. It does this by using both interior behavioral residue, e.g. what I'm reading and clicking on within the site, and exterior behavioral residue, e.g. photos of me taken outside of Facebook.
To understand why leveraging behavioral residue is so important for social networks, consider that, of the people who visit a typical website, only about 10% will create an account. Of those, about 10% will make at least one content contribution, and of those, about 10% will become core contributors. So for a typical user with a couple hundred friends, this translates into seeing content from only a tiny handful of other people on a regular basis.
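To make the funnel concrete, here's a back-of-the-envelope sketch in Python. The 10% rates are the rough rule-of-thumb figures from the paragraph above, and the 200-friend count is purely illustrative:

```python
# Back-of-the-envelope participation funnel.
# The 10% rates are rules of thumb, not measured constants.

friends = 200            # a typical user's friend count (illustrative)
contribute_rate = 0.10   # account holders who post at least once
core_rate = 0.10         # contributors who become core contributors

# Friends already have accounts, so only the last two stages apply.
contributors = friends * contribute_rate   # friends who ever post
core = contributors * core_rate            # friends who post regularly

print(f"Friends who ever contribute: {contributors:.0f}")  # ~20
print(f"Friends seen regularly:      {core:.0f}")          # ~2
```

Without something like behavioral residue surfacing implicit activity, those one or two regular contributors are nearly all the content a typical user ever sees.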
In contrast with Facebook, one of the reasons why FourSquare has yet to succeed is significant problems with its initial design decisions:
- The only content on the site comes from users who manually check into locations and post updates. This means that of my 150 or so friends, I'm only seeing what one or two of them are actually doing, so what's the value?
- The heavy use of extrinsic motivators (e.g. badges), despite it having been shown time and again that extrinsic motivation undermines intrinsic motivation.
The latter especially is a good example of why investing based on traction alone is problematic: many startups that leverage extrinsic rewards are able to get a good amount of initial traction, but almost none of them are able to retain users or cross the chasm into the mainstream. Why isn't it anyone's job to know this, even though the research is readily available for anyone who wants to read it? And why is it so hard to go to any major startup event without seeing VCs shower money on startups so contraindicated by the research that they have almost no realistic chance of succeeding?
This same critique of investors applies equally to the startups themselves. You probably wouldn't hire an attorney who wasn't willing to familiarize himself with the relevant case law before going to court. So why is it that the vast majority of people hired as community managers and growth marketers have never read Robert Kraut? And the vast majority of people hired to create mobile apps have never heard of Mizuko Ito?
A lot of people associate the word design with fonts, colors, and graphics, but what the word actually means is fate — in the most existential sense of the word. That is, good design literally makes it inevitable that the user will take certain actions and have certain subjective experiences. While good UX and graphic design are essential, they're only valuable to the extent that the person doing them knows how to create an authentic connection with users and elicit specific emotional and social outcomes. So why are we hiring designers mainly for their Photoshop skills and maybe a few tricks for optimizing conversions on landing pages? What a waste.
Of all the social sciences, the following seem to be disproportionately valuable in terms of creating and evaluating startups:
- Psychology / Social Psychology
- Internet Psychology / Computer Mediated Communication
- Cognitive Development / Early Childhood Education
- Organizational Behavior
- Sociology
- Education Research
- Behavioral Economics
And yet not only is no one hiring for this, but having expertise in these areas likely won't even get you so much as a nominal bonus. I realize that traction and team will always be the two biggest factors in determining which startups get funded, but have we really become so myopic as to place zero value on knowing whether a startup is congruent with, or contraindicated by, the last 80+ years of research?
So should you invest in (or work for) the startup that sends text messages reminding people to take their medicine? How about the one that lets you hire temp laborers using cell phones? Or the app for club owners that purports to increase the amount of money spent on drinks? In each of these cases there is a wealth of relevant literature that can be used to help figure out whether the founders have done their homework and how likely they are to succeed. And it seems like if you don't have someone who's willing to invest a few hours to read the literature, then you're playing with a significant handicap.
Investors often wait months before investing in order to let a little more information surface, during which time the valuation can (and often does) increase by literally millions. Given that the cost of doing the extra research for each deal would be nominal in the grand scheme of things, and given the fact that this research can benefit not only the investors but also the portfolio companies themselves, does it really make sense to be so confident that there's nothing of value here?
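For a sense of scale, here's a toy comparison. Every number below is a hypothetical assumption chosen for illustration, not a figure from any actual deal:

```python
# Toy comparison: the cost of a per-deal literature review vs. the
# valuation drift from waiting months for more information to surface.
# All numbers are hypothetical assumptions, purely for illustration.

review_hours = 10            # hours to read the relevant literature
expert_rate = 300            # $/hour for a domain-expert reviewer
review_cost = review_hours * expert_rate

valuation_drift = 2_000_000  # assumed $ increase over a few months

print(f"Literature review per deal:   ${review_cost:,}")     # $3,000
print(f"Valuation drift from waiting: ${valuation_drift:,}") # $2,000,000
print(f"Ratio: roughly {valuation_drift // review_cost}x")   # ~666x
```

Even if these assumptions are off by an order of magnitude, the review is a rounding error next to what waiting already costs.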
What makes the web special is that it's not just a technology or a place, but a set of values. That's what we were all originally so excited about. But as startups become more and more prosaic, these values are largely becoming lost. As Howard Rheingold once said, "The 'killer app' of tomorrow won't be software or hardware devices, but the social practices they make possible." You can't step in the same river twice, but I think there's something to be said for startups that make possible truly novel and valuable social practices, and for creating a larger ecosystem that enables them.
You seem to be looking at *what* they're doing, but you should also ask *how* they're doing it.
Every startup that I've seen (this side of Fog Creek) insists on an "open floor plan" office space, an idea which has never been proven to increase productivity, and which several studies have shown does just the opposite. Not only do all startups today insist on it, but all the VCs I've met play along -- and most seem to even encourage it.
Another elephant in the room nobody wants to talk about: programming languages. Some are just worse than others (yes, they are!) at programmer efficiency, error handling, scalability, and so on. We've long known that bug count is roughly proportional to line count, in any language, yet people still try to justify languages that are much more verbose than others. (Except Brainfuck and Whitespace, which are obviously bad. But every other language is equally good, and if it's not, you're just not using it correctly.) And heaven help you if you try to write a compiler! Just because it was in the GoF book and it's well-understood technology from before you were born doesn't mean anybody at your company will want to go near it. No, you have to use Java or PHP, because then we'll be able to hire more easily!
As long as the most fundamental aspects of how a company operates are based on arbitrary rules from management and investors, you're never going to see more specific aspects (like what software they're trying to build this month) held to evidence-based standards, either.
It's a prerequisite for anything else. It's like a flat-earth research group who bow to their statues of the Greek gods in the office every morning. You complain that their work on alchemy doesn't look terribly promising according to the latest scientific journals, and the CEO waves off your objections with "Zeus told us it would work." Their alchemy research is going to fail, but it's really just a symptom of a bigger problem. And how can a VC (or potential employee) pick a good startup, when *every one* bows to the Greek gods every day? 1 in 20 alchemy research groups will come up with *something* profitable, after enough pivoting, so you still come out ahead in the end.
Posted by: Fundamentals | June 06, 2014 at 05:21 PM
First off, Alex, I love this. You characterized a real problem.
Scientific Validation as a Service? SVaaS.
Investors validating startup business models. Product leaders validating design artifacts. CIOs validating big purchases.
I'd try to bootstrap SVaaS, but where's the scientific evidence that...
- customers will pay to make substantially better decisions?
- customers will pay more than the cost of the information?
- validation can be delivered within the time frame of the decisions?
- customers will actually appreciate the advice and do nice things (buy again, tell their friends) vs. resent being shown up?
- validated decisions are materially better than prevailing wisdom?
There are consulting firms that sell research-driven expertise. The problem's always been that their work only applies to momentous decisions. Contrast this with the gazillion small decisions that shape a web site or the design of a retail store's layout or in-game affordances or rural electrification programs.
There've been numerous attempts at expertise markets, where you rent experts by the minute or by the question. Even there, it's hard to extract science-informed answers from a sea of prevailing wisdom, and real science from junk science.
The best part of SVaaS is that it's a gigantic market with a really hard problem to solve. A worthy goal.
Posted by: Evanwolf | June 06, 2014 at 07:02 PM
*Sigh*
I used to think that academic research mattered too. Then I spent 8 years in a Computer Science PhD, worked my way to the cutting edge of Human-Computer Interaction, became an expert in the use of Psychology, Sociology, and Behavioral Economics for design problems, and realized that it wasn't very useful.
Why don't you try designing something, and see just how much academic research you find useful? You'll answer your own question.
If I'm wrong, and the academic research actually IS useful, then you (having studied all this research) will be able to out-compete all the other startups, become a millionaire, and have proven me wrong with dollars in your pocket.
Here's the sad state of affairs in academic HCI: rather than inventing the future, researchers study what people in industry have already done. They statistically analyze the design decisions that industrial pioneers have already learned are good.
Posted by: Michael | June 07, 2014 at 12:43 AM
For instance, your example paper on "Behavioral Residue" was published in 2002. But Brad Fitzpatrick didn't need to read that when he designed residue-like features into LiveJournal in 1999.
If you read the proceedings of CHI — the biggest academic conference in Human-Computer Interaction — you'll see more papers studying the existing systems of Facebook and Twitter than you'll see inventing the future.
In computing, academics follow industry, and rarely the other way around. In my 9 years, I can't think of any idea that started in my field and ended up in industry, but I can name hundreds of results that went the other way.
Posted by: Michael | June 07, 2014 at 12:49 AM
There is also the issue that the most interesting research findings may be based on irreproducible results: http://www.apa.org/monitor/2013/02/results.aspx
Posted by: Michael Bernstein | June 07, 2014 at 02:03 PM
@Michael Bernstein,
I have no doubt that the vast majority of published research is wrong. But I don't think that's a good excuse for not taking the time to read (and understand) the relevant literature.
Posted by: Alex Krupp | June 07, 2014 at 03:51 PM
My work spans accelerating independent startups (http://JFDI.Asia), trying to bring technology to market and the market to technology at the National University of Singapore (http://bit.ly/NUS_EDL), and advising a government research lab (http://www.i2r.a-star.edu.sg/). Across all those environments I consistently see intelligent people failing to look for 'prior art' in any form, whether in the research literature or out there in the market. My business partner @mengwong is famous for sending a torpedo into business pitches when, after 20 seconds of googling, he finds numerous existing solutions to the problem a business is trying to solve, typically after hearing the business claim that it 'has no competitors'.
The key word there is 'problem' - most of us don't focus on them. As human beings we are hypnotized by the emotion that accompanies the 'Act of Creation' (as Koestler called it), and we are conditioned from childhood by stories of 'Great Ideas' people from Edison to Jobs. So most of us start with solutions, not problems. Not only do we start out looking down the wrong end of the telescope, but we are blinded by powerful yet irrational emotions that lead to massive confirmation bias. Perhaps that starts to explain why we don't look into the literature.
As to what we might find if we do, the problem there is not just that (as other commenters have pointed out) research tends to follow industry, but also that the literature contains so much noise. Where is the signal? That's the challenge for startups and investors who want to execute.
The pressure on academics to churn out papers creates a relatively small stream of primary research, from which it is often hard to generalize, and a snowstorm of commentary by intelligent people who have made it their profession to talk rather than to do, which seems largely disconnected from the real world. Elegant theories butter no parsnips.
I am inspired by the way that a new science of startups, based around tools including Lean Innovation methods, is encouraging rigorous formulation of hypotheses about the unknown and systematic testing to get products and services right from the start. We don't expect the certainty of an old-school physical science experiment, but we do expect to create evidence for the patterns that we believe we are seeing. Perhaps social scientists who want to be taken seriously should test their theories in a startup environment, backing them with their own money.
Posted by: Hughmason | June 08, 2014 at 06:20 AM
Academia studies what came before, ideas from previous thinkers. Startups are often trying to create on the bleeding edge, and it's as much art as science. I have an instinctive distrust of academic types because, outside of hard scientific fields where answers are either 100% right or wrong, their credentials and confidence come from managing scholastic politics and instructors well, playing the game. They're not really taught to improve things or invent new ideas; they get good grades based on regurgitation and subservience. The kinds of academics who would qualify for startup assessment would probably already be developing a startup of their own and be predominantly identified as a startup person who happened to have gone through academia.
Posted by: drhouse | June 08, 2014 at 06:26 AM