
Comments

You seem to be looking at *what* they're doing, but you should also ask *how* they're doing it.

Every startup that I've seen (this side of Fog Creek) insists on an "open floorplan" office space, an idea which has never been proven to increase productivity, and has been shown by several studies to do just the opposite. Not only do all startups today insist on it, but all the VCs I've met play along -- and most seem to even encourage it.

Another elephant in the room nobody wants to talk about: programming languages. Some are just worse than others (yes, they are!) at programmer efficiency, error handling, scalability, and so on. We've long known that bug count is roughly proportional to line count, in any language, yet people still try to justify languages which are much more verbose than others. (Except Brainfuck and Whitespace, which are obviously bad. But every other language is equally good, and if it's not, you're just not using it correctly.) And heaven help you if you try to write a compiler! Just because it was in the GoF book and it's well-understood technology from before you were born doesn't mean anybody at your company will want to go near it. No, you have to use Java or PHP, because then we'll be able to hire more easily!

As long as the most fundamental aspects of how a company operates are based on arbitrary rules from management and investors, you're never going to see the more specific aspects (like what software they're trying to build this month) held to scientific standards, either.

Getting the fundamentals right is a prerequisite for anything else. It's like a flat-earth research group who bow to their statues of the Greek gods in the office every morning. You complain that their work on alchemy doesn't look terribly promising according to the latest scientific journals, and the CEO waves off your objections with "Zeus told us it would work." Their alchemy research is going to fail, but it's really just a symptom of a bigger problem. And how can a VC (or potential employee) pick a good startup, when *every one* of them bows to the Greek gods every day? 1 in 20 alchemy research groups will come up with *something* profitable, after enough pivoting, so you still come out ahead in the end.

First off, Alex, I love this. You characterized a real problem.

Scientific Validation as a Service? SVaaS.

Investors validating startup business models. Product leaders validating design artifacts. CIOs validating big purchases.

I'd try to bootstrap SVaaS but where's the scientific evidence...

- customers will pay to make substantially better decisions?
- customers will pay more than the cost of information?
- validation can be delivered within the time frame of the decisions?
- customers will actually appreciate the advice and do nice things (buy again, tell their friends) vs. resent being shown up?
- validated decisions are materially better than prevailing wisdom?

There are consulting firms that sell research-driven expertise. The problem's always been that their work only applies to momentous decisions. Contrast this with the gazillion small decisions that shape a web site or the design of a retail store's layout or in-game affordances or rural electrification programs.

There've been numerous attempts at expertise markets, where you rent experts by the minute or by the question. Even there, it's hard to extract science-informed answers from a sea of prevailing wisdom, and real science from junk science.

The best part of SVaaS is that it's a gigantic market and a really hard problem to solve. A worthy goal.

*Sigh*

I used to think that academic research mattered too. Then I spent 8 years in a Computer Science PhD, worked my way to the cutting edge of Human-Computer Interaction, became an expert in the use of Psychology, Sociology, and Behavioral Economics for design problems, and realized that it wasn't very useful.

Why don't you try designing something, and see just how much academic research you find useful? You'll answer your own question.

If I'm wrong, and the academic research actually IS useful, then you (having studied all this research) will be able to out-compete all the other startups, become a millionaire, and have proven me wrong with dollars in your pocket.

Here's the sad state of affairs in academic HCI: rather than invent the future, they study what people in industry have already done. They statistically analyze the design decisions that industrial pioneers have already learned are good.

For instance, your example paper on "Behavioral Residue" was published in 2002. But Brad Fitzpatrick didn't need to read that when he designed residue-like features into LiveJournal in 1999.

If you read the proceedings of CHI — the biggest academic conference in Human-Computer Interaction — you'll see more papers studying the existing systems of Facebook and Twitter than you'll see inventing the future.

In computing, academics follow industry, and rarely the other way around. In my 9 years, I can't think of a single idea that started in my field and ended up in industry, but I can name hundreds of results that went the other way.

There is also the issue that the most interesting research findings may be based on irreproducible results: http://www.apa.org/monitor/2013/02/results.aspx

@Michael Bernstein,

I have no doubt that the vast majority of published research is wrong. But I don't think that's a good excuse for not taking the time to read (and understand) the relevant literature.

My work spans accelerating independent startups (http://JFDI.Asia), trying to bring technology to market and the market to technology at the National University of Singapore (http://bit.ly/NUS_EDL), and advising a government research lab (http://www.i2r.a-star.edu.sg/). Across all those environments I consistently see intelligent people failing to look for 'prior art' in any form, whether in the research literature or out there in the market. My business partner @mengwong is famous for sending a torpedo into business pitches when, after 20 seconds of googling, he finds numerous existing solutions for the problem a business is trying to solve, typically after hearing the business claim that it 'has no competitors'.

The key word there is 'problem' - most of us don't focus on problems. As human beings we are hypnotized by the emotion that accompanies the 'Act of Creation' (as Koestler called it), and we are conditioned from childhood by stories of 'Great Ideas' people from Edison to Jobs. So most of us start with solutions, not problems. Not only do we start out looking down the wrong end of the telescope, but we are blinded by powerful but irrational emotions that lead to massive confirmation bias. Perhaps that starts to explain why we don't look into the literature.

As to what we might find if we do, the problem there is not just that (as other commenters have pointed out) research tends to follow industry, but also that the literature contains so much noise. Where is the signal? That's the challenge for startups and investors who want to execute.

The pressure on academics to churn out papers creates a relatively small stream of primary research, from which it is often hard to generalize, and a snowstorm of commentary by intelligent people who have made it their profession to talk rather than to do, which seems largely disconnected from the real world. Elegant theories butter no parsnips.

I am inspired by the way that a new science of startups, based around tools including Lean Innovation methods, is encouraging rigorous formulation of hypotheses about the unknown and systematic testing to get products and services right from the start. We don't expect the certainty of an old-school physical science experiment, but we do expect to create evidence for the patterns that we believe we are seeing. Perhaps social scientists who want to be taken seriously should test their theories in a startup environment, backing them with their own money.
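To make "systematic testing" a bit more concrete, here is a minimal sketch (my own illustration with made-up numbers and a hypothetical helper name, not anything taken from the Lean literature) of the kind of arithmetic a startup might run to decide whether an observed lift between two landing-page variants is evidence or just noise, using a plain two-proportion z-test:

from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    # Two-sided z-test for the difference between two conversion rates.
    # conv_a, conv_b: conversions in variants A and B; n_a, n_b: visitors shown each.
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis "the variants convert the same".
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up numbers: variant B converts 250/2000 visitors vs. A's 200/2000.
z, p = two_proportion_z_test(200, 2000, 250, 2000)
print("z = %.2f, p = %.3f" % (z, p))  # a small p suggests the lift isn't just noise

Of course a p-value is a weak substitute for the certainty of an old-school physical science experiment, but it is at least evidence you can show an investor rather than a hunch.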

Academia studies what came before: ideas from previous thinkers. Startups are often trying to create on the bleeding edge, and it's as much art as science. I have an instinctive distrust of academic types because outside of hard scientific fields, where answers are either 100% right or wrong, their credentials and confidence come from managing scholastic politics and instructors well, playing the game. They're not really taught to improve things or invent new ideas; they get good grades based on regurgitation and subservience. The kinds of academics who would qualify for startup assessment would probably already be developing a startup of their own and be predominantly identified as a startup person who happened to have gone through academia.
