As the inventor and co-founder of Startups@NSN, I was one of the drivers behind a successful incubation program within a large (70,000-employee), complex multinational. We coached employees worldwide to generate hundreds of ideas and converted them into 6 prototypes in 2 months. After customer feedback, 4 commercial products were launched within months, one of which won a prestigious international innovation award.
Having gone through the whole process, if given the chance to do it again, I would make some substantial changes.
We overestimated three assumptions:
1) Employees have very innovative ideas
2) Employees understand customer’s problems
3) Employees can let go of unproductive products
Several employee ideas were very innovative, but the majority were just small changes to existing products. Most corporate employees are good at incremental innovation but have a hard time imagining products built on top of technology they do not yet know, like Cloud Computing (January 2010) or M2M (2011).
Using employees as a substitute for understanding customer needs is also not a great idea. Nothing beats real customer contact.
Finally, people fall in love with their prototypes too easily. They are blinded and cannot see that their brainchild is an ugly duckling rather than a beautiful swan.
So how can you do it better?
My first suggestion would be NOT to start with a technology but to start with the customer. Identifying real, important customer problems before identifying solutions is key.
Secondly, employee ideas, but also external ideas (e.g. via a competition), should be used to generate minimum viable product requirements on paper before any prototype is built. The solution definitions should be reviewed by customers to get early feedback. Besides the solution itself, other elements should also be evaluated: price, customer channels, unique value proposition, customer acquisition costs, etc. A good framework to use is the Lean Canvas.
Only after customers have validated your Lean Canvas and minimum viable product design should you build a prototype, or better still, the minimum viable product itself. Launching the product within months and only adding features after the initial product has proven successful should lower your initial costs and risk of failure.
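As a sketch of the process above, the nine Lean Canvas boxes can be treated as a simple checklist to review with customers before building anything. The box names below are the standard Lean Canvas fields; every example entry is hypothetical, not taken from the actual program:

```python
# Minimal sketch of a Lean Canvas as a checklist. The nine keys are the
# standard Lean Canvas boxes; all example entries are hypothetical.
lean_canvas = {
    "problem": ["Operators lack real-time network analytics"],
    "customer_segments": ["Tier-1 mobile operators"],
    "unique_value_proposition": "Insight in minutes, not months",
    "solution": ["Hosted analytics dashboard"],
    "channels": ["Direct sales", "Industry events"],
    "revenue_streams": ["Monthly subscription"],
    "cost_structure": ["Cloud hosting", "Development team"],
    "key_metrics": ["Trials started", "Conversion rate"],
    "unfair_advantage": "Existing operator relationships",
}

def unvalidated_boxes(canvas):
    """Return the boxes still missing content, i.e. what remains to be
    filled in and reviewed with customers before building anything."""
    return [name for name, content in canvas.items() if not content]

print(unvalidated_boxes(lean_canvas))  # [] once every box is filled in
```

The point is not the data structure but the discipline: no prototype work starts while `unvalidated_boxes` still returns anything.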
If you are looking for ways to launch new innovative products quickly, why don’t we talk (Maarten at telruptive dot com)…
With Big Data in the news all day, you would think that having a lot of high-quality data is a guarantee for new revenues. However, asking yourself how to generate new revenues from existing data is the wrong question. It is sub-optimal because it is like having a hammer and assuming everything else is a nail.
A better question to ask is: “What data-insight problems do potential customers have that I could solve?”
If you haven’t heard of Arduino or Raspberry Pi, then you need to get up to speed urgently. Arduino is revolutionizing hardware and gadget innovation. It is a do-it-yourself hardware kit that lets you build complex systems by stacking up components like GPRS/3G, NFC, etc. Raspberry Pi is an ARM GNU/Linux box for $25.
However Kickstarter just funded the next generation of both projects:
The Parallella Super Computer for $99: a 64-core computer on a small board, at an affordable price and with very low power consumption. Imagine stacking a hundred Parallellas in a box. A parallel-programming competition has already been set up.
Both projects are open hardware and open source, so expect hobbyists to come up with lots of cool ideas…
Data Scientist is said to be the sexiest job of the 21st century. But do we really need a new army of data scientists, or is there an alternative? There might be, and it is called data democracy.
What is data democracy?
Data democracy means giving all people access to all data insight. In an enterprise, data democracy is about enabling knowledge workers to share insights and avoiding the construction of data silos. It means democratizing tools so that every co-worker can become a data scientist without needing a PhD in statistics or mathematics: visual tools that allow “Excel users” to apply neural networks, support vector machines, random forests, etc. to make predictions or to classify and cluster data, but without having to understand the underlying machine learning algorithms in great detail. A sort of corporate RapidMiner that scales.
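To make the “no PhD required” point concrete, here is a minimal sketch of what such a democratized interface could feel like: a fit/predict wrapper in the style of scikit-learn, where the user never sees the algorithm. The class name and its trivial one-nearest-neighbour internals are illustrative assumptions, not an existing tool:

```python
from math import dist  # Python 3.8+

class SimplePredictor:
    """Illustrative sketch of a 'no-PhD' predictive tool: the user only
    sees fit() and predict(); the algorithm behind them (here a trivial
    one-nearest-neighbour rule) stays hidden behind the interface."""

    def fit(self, rows, labels):
        self._rows, self._labels = list(rows), list(labels)
        return self

    def predict(self, row):
        # Return the label of the closest training row.
        best = min(range(len(self._rows)),
                   key=lambda i: dist(row, self._rows[i]))
        return self._labels[best]

# An "Excel user" workflow: columns of numbers in, a label out.
model = SimplePredictor().fit([[1, 1], [8, 9]], ["small", "large"])
print(model.predict([2, 2]))  # small
```

Swapping the nearest-neighbour rule for a random forest or a neural network would not change a single line of the user-facing workflow; that separation is the whole idea.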
At the same time we also need better visualization tools. Everybody should be able to create infographics easily: tools that allow ordinary people to create stunning data visualizations that go beyond boring reports.
Finally, we need better tools to find and share data insights. We need a “Databook”: a Facebook for data insight, where you can easily find the insight you need. A tool that lets you distribute your predictions about next quarter’s sales and compare them with the predictions of others.
In summary, we need the data scientists of this world to focus on making access to data insight available to every knowledge worker. Simplify instead of algorithmify! Enable everybody to be a data scientist…
Big Data is hyped right now. Everything that comes close to Hadoop or NoSQL turns into gold! Unfortunately we are getting close to Gartner’s “Peak of Inflated Expectations”. Hadoop does an excellent job of storing many terabytes of data and running relatively complex Map-Reduce operations. Unfortunately, this is just the tip of the Big Data requirements iceberg. Intelligent Big Data analytics requires more than counting who visited a website. Map-Reduce can express complex machine learning, but it was not really made for it: the Mahout project has to jump through too many hoops to convert matrix-based analytics algorithms into Map-Reduce-enabled versions. Map-Reduce is simply not an easy way of doing matrix operations, yet most machine learning algorithms rely on matrices. Also, real-time and batch often go together in real life: you need to pre-calculate recommendations or train a neural network offline, but you want recommendations, predictions and classifications delivered in real time. Unfortunately, Hadoop is only good at one of the two.
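To see why matrix work is awkward in this model, here is a hedged sketch: a single matrix-vector multiply forced into map and reduce phases, with plain Python standing in for a Hadoop job. Each nonzero cell is mapped to a (row, partial product) pair, then the reduce sums per row:

```python
from itertools import groupby
from operator import itemgetter

# Sketch: y = A @ x phrased the way a Map-Reduce job must phrase it.
# A is stored as (row, col, value) triples, as a distributed store would.
A = [(0, 0, 1.0), (0, 1, 2.0), (1, 0, 3.0), (1, 1, 4.0)]
x = [10.0, 20.0]

# Map phase: each matrix cell emits (row, value * x[col]).
mapped = [(row, value * x[col]) for row, col, value in A]

# Shuffle + reduce phase: group partial products by row and sum them.
mapped.sort(key=itemgetter(0))
y = [sum(v for _, v in group)
     for _, group in groupby(mapped, key=itemgetter(0))]

print(y)  # [50.0, 110.0]
```

One multiply costs a full map, shuffle and reduce pass over the data; iterative algorithms like gradient descent or power iteration need hundreds of such multiplies, which is exactly where Mahout-on-Hadoop has to jump through its hoops.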
So when the majority of investors and business analysts realize that Hadoop has limitations, what will happen?
Answer: nothing unexpected. Hadoop will continue to be used for what it is best at. A new hype will arrive as soon as somebody solves the real-time distributed analytics problem…
Recently presented at TED, Aurasma is a mobile augmented-reality app that impresses everybody:
This is the future of mobile. You go to a museum and get all the info about a painting in a live video overlaid on the painting itself. You could get recipes for a fruit or vegetable you have never prepared before. You get instructions on how to install your WiFi router. A lot of possibilities, and most are still to be invented.
In the same week that Twilio announced global SMS delivery, WAC was declared a failure.
Was it a surprise? Not really. Developers want simple APIs that are cheap and global. Twilio offers this, WAC does not. Are operators learning anything? The answer is they are not.
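The contrast shows up directly in code. Below is a hedged sketch of the kind of call developers get from a Twilio-style REST API; the payload fields are modelled on Twilio’s Messages resource, and the helper function is illustrative only: it builds the request but performs no network call:

```python
def build_sms_request(account_sid, token, to, from_, body):
    """Build the HTTP request a Twilio-style SMS API expects.
    Field names follow Twilio's Messages resource; this sketch only
    constructs the request and never touches the network."""
    return {
        "method": "POST",
        "url": (f"https://api.twilio.com/2010-04-01/"
                f"Accounts/{account_sid}/Messages.json"),
        "auth": (account_sid, token),           # plain HTTP basic auth
        "data": {"To": to, "From": from_, "Body": body},
    }

req = build_sms_request("ACxxxx", "secret",
                        "+15551234567", "+15557654321", "hi")
print(req["data"]["To"])  # +15551234567
```

One POST with three fields sends an SMS anywhere in the world. Compare that with a WAC-style standards process and operator-by-operator onboarding, and the developer’s choice is obvious.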
Telecom dogma 1: Users will not use a service that is not a global standard.
Internet response: proprietary APIs.
Telecom dogma 2: 99.999% availability with expensive hardware and Oracle RAC is the only way to launch a telecom service.
Internet response: Amazon and Rackspace virtual servers and MySQL.
Telecom dogma 3: I am the king. I set the prices and users have to pay them.
Internet response: $1 per virtual number, $0.01 per SMS or per call minute.
How can a company with less than 100 employees offer better pricing than the actual network owners?
Operators think in terms of ROI within 6 months and only then ask what users might like. Internet players launch something simple and cheap, get continuous feedback and improve the service. In 12-36 months they dominate the world.
Know any bad service on the Internet that had a good ROI in 6 months? If you do not provide what users want, ROI will be a lie in your Excel sheet. Forget 99.999%, forget RFPs, forget 40-70% revenue shares, etc. Either you innovate and launch in 3 months with daily improvements afterwards, or you will not be an Internet player. The alternative is being a bit pipe. But even there FreedomPop, Free.fr, Google FTTH, etc. might spoil the ROI…
In a video posted on YouTube in January 2011, PhD student [now Dr., not surprisingly] Zdenek Kalal shows off his doctoral thesis project: Predator. Predator is a computer-vision algorithm that shows how far this nascent field has matured in just a few years.
Afterwards, the face is automatically recognized, even when the head is turned sideways.
Computer vision is one of those domains that has been underutilized by most, except of course Facebook, Google, etc. However, in an age where people are moving from voice to video chat and even continuous live broadcasting, everybody who wants to add extra value for end-users, customers, or advertisers should be looking at the possibilities of computer vision. Imagine what is possible if you combine a Kinect or Leap with Predator: an online advertisers’ and secret services’ paradise.
The whole video can be found here: