Amazon is taking another step towards disrupting an existing market. This time it has its sights set on the data warehouse market. Amazon is currently running a limited preview of a new service called Redshift. Redshift promises a data warehouse starting from $1,000 per terabyte per year. To get to this price point you have to go for the XL reserved instance, which comes with a minimum of 2 TB, so you actually pay $2,000 per year. If you want to pay per use, you pay $0.85 per hour, which comes to roughly $7,500 per year for 2 TB. You can also scale up to a hundred 8XL instances, which gives you 1.6 petabytes of compressed data. Amazon does the management (software patching, scaling, restarting failed instances, etc.) as well as backups for you. The initial partners are Jaspersoft and MicroStrategy, but more solution providers are promised. You can connect to your data warehouse via PostgreSQL JDBC or ODBC drivers. The limited preview is only available in the US East region, but given Amazon's track record this should change quickly.
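Because Redshift speaks the PostgreSQL wire protocol, an ordinary PostgreSQL client should be able to query it. A minimal sketch in Python, assuming the psycopg2 driver and a hypothetical cluster endpoint, credentials and table:

```python
# Minimal sketch: querying Redshift through a plain PostgreSQL driver.
# The endpoint, credentials and table name below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="examplecluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439,                 # Redshift's default port
    dbname="dev",
    user="admin",
    password="...",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM page_views;")  # hypothetical table
print(cur.fetchone()[0])
conn.close()
```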
As always, Amazon is one step ahead of the competition and is able to offer data warehouse (DW) solutions to companies that were traditionally too small to carry the total cost of ownership of an on-site data warehouse deployment. However, as with any disruptive innovation, if Amazon manages to extend the offering to include all the tools business analysts and data scientists need, then over time Redshift could disrupt even the high-end DW market. For sure, to be continued…
In this post I want to show a technique that is an alternative to writing a business case: the "Lean Canvas". The Lean Canvas was proposed in the book "Running Lean", which itself is based on "The Lean Startup".
The idea of the Lean Canvas is to put what would go into the executive summary of a business case on one page and forget about writing the rest. The justification: writing a business case takes 2 to 3 months, and CxOs normally only read the executive summary. So instead of spending 2 to 3 months, you spend hours or days and get the canvas in front of customers for feedback. With that feedback you can refine your idea and create a Minimum Viable Product in the same 2 to 3 months. So instead of having a nice paper report nobody reads, you can start earning money.
The Lean Canvas captures the major customer problems you want to solve. These problems need to be important [a painkiller, not a vitamin], shared by many and without an easy workaround. Customers need to validate them before you start thinking about solutions. Customers are the ideal party to tell you about their problems but not necessarily the best to give you ideas for a solution; think about Henry Ford's words: "If I had asked what people wanted they would have said faster horses…". Most startups focus excessively on the solution and forget that they need to validate a lot more. After the problem, the second most important part of the Lean Canvas is the customer segment and channel: who do you want to offer a product to, and how do you reach them? The unique value proposition is also key. The remaining elements of the Lean Canvas are the unfair advantage [how do I prevent others from simply copying my business?], key metrics [how do I measure success?], and last but not least the cost structure [what does it cost to acquire a customer, build a minimum viable product, etc.?] and revenue streams [how much am I going to charge, and what other revenue sources are there?]. You can create a Lean Canvas on paper or use a SaaS version.
So much for the theory; now let's review an example…
The customer problems:
Door keys are a nuisance. You can lose them. You have to give copies to family and friends if you want them to go to your house if you are not there. Do you really want to give the cleaning lady or man a copy? Is my lock safe from burglars?
The mailman or delivery guy comes to my home but often packages do not fit my mailbox.
When people ring my bell, they know when I am not home. That is unsafe.
So what is the solution?
My proposed solution is the iDoor. The iDoor is an intelligent door that you control remotely to decide who gets in, who can deliver and who is shut out. Via a camera and a full-duplex audio system, you can see who is standing in front of your door and communicate with them. Your smartphone becomes your remote door manager. Advanced models could have face recognition and share data with other intelligent doors in the neighbourhood: if you are taking a siesta and those annoying door-to-door vendors approach, they will automatically hear a message to go away and your bell will not ring. If a burglar is detected, the police can be warned. If the postman has a big package, you can remotely open a compartment where he can store it. Your family can go into the house without problems. Your cleaning lady can as well, as long as it is during her normal working hours and she comes alone.
Unique value proposition?
As if you were home 24×7. Busy people will never miss an Amazon package again. Burglars will not know if you are in the garden or not home at all.
Customer segment?
Mid- to high-end house owners.
Channel?
Partner with an existing door manufacturer that targets the upper market, for example Hörmann.
Key metrics?
Door sales and door usage.
Cost structure?
A complete costing still has to be done. TBD.
Revenue streams?
Door sales and door installation/maintenance services are the primary revenue streams. However, door apps and selling anonymized, aggregated data could be additional sources.
You can find a quick summary in the following slides, as well as some details about the technology components. This example still needs customer validation, and several areas need quite a bit more work [e.g. cost, revenue, unfair advantage, etc.]. However, I hope the idea is clear.
Maarten Ectors is a senior executive specialised in value innovation: creating new products and generating new revenues based on cutting-edge technologies like Big Data, Cloud, etc. He is currently looking for new challenges. You can contact him at: maarten at telruptive dot com.
I was expecting this announcement a lot sooner. I made some slides about a similar concept some months ago (I called it the iCar) and presented them to one of the largest car-parts manufacturers. Unfortunately, car manufacturers have been very slow to adopt new innovations. At least one car manufacturer has entered the 21st century: Ford has created OpenXC, an open source hardware and software solution to interact with your car. OpenXC = Arduino + Android + Car Interface. Developers will be able to use their Android device to read information from the car: steering wheel angle, vehicle speed, location, accelerator pedal position, brake pedal position, engine speed, odometer (distance travelled), fuel consumed, fuel level, headlamp status, high-beam status, ignition status, parking brake status, transmission gear position, turn signal status, etc.
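OpenXC exposes these signals as a stream of simple JSON name/value messages. A hedged sketch of what consuming them could look like in Python; the stream source and the exact signal names are illustrative assumptions:

```python
# Hedged sketch: react to OpenXC-style JSON messages, one per line.
# Signal names like "vehicle_speed" follow OpenXC conventions, but
# treat them here as illustrative assumptions.
import json

def watch_speed(stream, limit_kph=120):
    for line in stream:
        msg = json.loads(line)
        if msg.get("name") == "vehicle_speed" and msg.get("value", 0) > limit_kph:
            print("Speed limit exceeded:", msg["value"], "km/h")

# Example with a canned message instead of a live USB/Bluetooth stream:
watch_speed(['{"name": "vehicle_speed", "value": 135}'])
```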
Unfortunately, at the moment you can only read data from your car, not send commands to it. It would be good if OpenXC offered real two-way interaction. Think about the possibilities of:
1) Parental control apps – my teenage child will not be able to drive faster than 120 km/h on the highway and 50 km/h in the city centre, and I can tell the car to stay out of certain neighbourhoods.
2) Personalization – my car adapts to me. If I am alone in the car, the radio blasts out hits from the 90s, the engine switches to sport mode, the inside temperature goes to 21°C, etc. If my family is present: children's music, comfort driving, 22.5°C, etc.
3) Predictive Maintenance – my car tells me that there is a problem, finds the garage that has the spare parts in stock and schedules an appointment based on my calendar’s availability.
These are just a few of many ideas. The main point is that entertainment, personalization and third-party services will get an enormous boost if open hardware, open software and creativity are allowed to enter your car…
Maarten is currently looking for new challenges as a senior executive and expert in value innovation, using cutting-edge technologies to generate new revenues. Contact him at maarten at telruptive dot com.
On Quora there was a question about how much CAPEX and OPEX you can save by moving to the Cloud. My short answer is: you might not save any.
My longer answer:
If you compare owning your own data center to owning a car, then hosting is like renting one and the Cloud is a taxi. If your existing data center holds a lot of hardware and software that has been written off or is highly utilized, then moving your solutions to the Cloud might well increase your monthly bill, just as a travelling salesman who switches from his own car to taxis will see higher transport costs. In this case virtualization is the best solution.
So why is everybody talking about the Cloud then?
The Cloud is great for three scenarios:
1) If you are starting something new
2) If you have unpredictable load
3) Pay per use services
Starting something new
Startups benefit most from the Cloud since they have to find a sustainable revenue stream before they run out of cash. Time is money. Not having to invest upfront in hardware and growing your hardware together with your needs is very attractive to them.
Also any other type of innovation or unproven business within existing companies should be using the Cloud for the exact same reason.
Unpredictable load
If you are lucky enough to be in a situation where your load grows extremely fast and grows together with your revenues, then the Cloud is ideal as well.
The same goes when you have that one day a year on which your load is 100 times larger than on the second-busiest day, or when your load is concentrated in a few hours of the day and falls to almost nothing during the rest. All these spikes can be moved to the Cloud via a hybrid solution.
Pay per use
Instead of focusing on the software and hardware that is fully utilized in your data center, focus on the software and hardware that is not: those promising projects that went nowhere, the software that is only needed once a month or is hardly ever used.
Software-as-a-Service (SaaS) is the main cost saver in the Cloud. Substitute infrequently used software with SaaS solutions and pay only for usage: no upfront investment in hardware, licenses, set-up, etc. If you start using this type of software heavily, you can always make a business case to bring it back into your data center. There are thousands of examples, ranging from general solutions like CRM, ERP, recruiting and project management to specialized, industry-specific SaaS. Look at SaaS marketplaces to understand the full offering.
Convert your CAPEX into Revenues
The last piece of advice is to think about your current solutions. If you have built a custom solution for some industry problem, converting it into a SaaS offering for others might be the best way to avoid future CAPEX approval problems. The reason: once a solution is converted from a cost item into a revenue generator, management will suddenly look at it from a totally new perspective…
Maarten Ectors is a senior executive who is an expert in generating new revenues from new technologies like the Cloud, Big Data, Machine Learning, Mobile, etc. He is currently looking for new challenges. You can contact him at maarten at telruptive dot com.
If you just invested a lot of money in a Big Data solution from any of the traditional BI vendors (Teradata, IBM, Oracle, SAS, EMC, HP, etc.) then you are likely to see a sub-optimal ROI in 2013.
Several innovations will come in 2013 that will change the value of Big Data exponentially. Other technology innovations are just waiting for smart start-ups to put them into good use.
Real-Time Big Data Queries
The first major innovation will be the coming of age of Google Dremel-like solutions such as Impala and Drill. They will allow real-time queries on Big Data and be open source, so you will get, for free, an offering superior to what is commercially available today.
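These engines expose a plain SQL interface. As a hedged sketch, assuming the impyla DB-API client plus an illustrative host, port and table, an interactive query against Impala could look like this:

```python
# Hedged sketch: a real-time SQL query against Impala via the impyla
# DB-API client. The host, the port (Impala's default HiveServer2 port)
# and the table name are illustrative assumptions.
from impala.dbapi import connect

conn = connect(host="impala-node.example.com", port=21050)
cur = conn.cursor()
cur.execute("SELECT country, COUNT(*) FROM visits GROUP BY country")
for row in cur.fetchall():
    print(row)
conn.close()
```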
Cloud-Based Big Data Solutions
The absolute market leader is Amazon with EMR. Elastic MapReduce is not so much about being able to run a Map-Reduce job in the Cloud as about paying for exactly what you use and no more. The traditional BI vendors are still getting their heads around usage-based licensing for the Cloud. Expect a lot of smart startups to come up with really innovative Big Data and Cloud solutions.
Big Data Appliances
You can buy some really expensive Big Data appliances, but here too disruptive players are likely to change the market. GPUs are relatively cheap: stack them into servers and use something like Virtual OpenCL to build your own virtualized GPU cluster. These types of home-made GPU clusters are already being used for security-related Big Data work.
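To give a flavour of the programming model such a cluster would run, here is a minimal PyOpenCL sketch of a vector-add kernel; the idea of Virtual OpenCL is that remote cluster GPUs appear to code like this as ordinary local devices:

```python
# Minimal PyOpenCL sketch: a vector add on whatever OpenCL device is
# available. Virtual OpenCL aims to make remote cluster GPUs show up
# as ordinary devices to exactly this kind of code.
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```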
Finally, Parallella will put a 16-core supercomputer into everybody's hands for $99. Their 2013 supercomputer challenge is definitely something to keep your eyes on, and their roadmap talks about 64- and 1,000-core versions. If Adapteva can keep its promises and flood the market with Parallellas, then expect Parallella clusters to be the Big Data appliance of 2013.
Distributed Machine Learning
Mahout is a cool project, but Map-Reduce might not be the best architecture for running distributed backpropagation or other iterative machine learning algorithms. Jubatus looks promising. Algorithmic innovations like HogWild! could also really change the dynamics of efficient distributed machine learning. This space is definitely ready for more ground-breaking innovations in 2013.
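The core HogWild! idea is surprisingly simple: let several workers run stochastic gradient descent on one shared weight vector without any locking, accepting the occasional overwritten update. A minimal sketch under illustrative assumptions (noiseless least-squares regression on synthetic data; all sizes and rates are made up):

```python
# Minimal sketch of the HogWild! idea: lock-free parallel SGD on a
# shared weight vector. Racy overwrites are tolerated on purpose; for
# sparse problems the algorithm still converges. Constants illustrative.
import numpy as np
from multiprocessing import Process
from multiprocessing.sharedctypes import RawArray

def sgd_worker(shared_w, X, y, epochs=20, lr=0.001):
    w = np.frombuffer(shared_w, dtype=np.float64)   # shared; no lock taken
    for _ in range(epochs):
        for i in np.random.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]         # squared-loss gradient
            w -= lr * grad                          # racy, lock-free update

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = rng.normal(size=10)
    X = rng.normal(size=(2000, 10))
    y = X @ true_w
    shared_w = RawArray("d", 10)                    # zero-initialised weights
    workers = [Process(target=sgd_worker, args=(shared_w, X, y))
               for _ in range(4)]
    for p in workers: p.start()
    for p in workers: p.join()
    w = np.frombuffer(shared_w, dtype=np.float64)
    print("max weight error:", np.abs(w - true_w).max())  # should be tiny
```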
Easier Big Data Tools
This is still a big blank spot in the Open Source field. Having open source, easy-to-use drag-and-drop tools for Big Data analytics would really accelerate adoption. We already have some good commercial examples (Radoop = RapidMiner + Mahout, Tableau, Datameer, etc.), but we are missing good Open Source tools.
I am currently looking for new challenges, so if you are active in the Big Data space and looking for a knowledgeable senior executive, be sure to contact me at maarten at telruptive dot com.
As the inventor and co-founder of Startups@NSN, I was one of the drivers behind a successful incubation program within a large (70,000 employees), complex multinational. We coached employees around the globe to generate hundreds of ideas and converted those into 6 prototypes in 2 months. After customer feedback, 4 commercial products were launched within months, one of which won a prestigious international innovation award.
Having gone through the whole process, if given the chance to do it again, I would make some substantial changes.
We overestimated a few aspects:
1) Employees have very innovative ideas
2) Employees understand customer’s problems
3) Employees can let go of unproductive products
Several employee ideas were very innovative, but the majority were just small changes to existing products. Most corporate employees are good at incremental innovation but have a hard time imagining innovative products on top of still-unfamiliar technology innovations like Cloud Computing (January 2010) or M2M (2011).
Also using employees as a substitute for understanding customer needs is not a great idea. Nothing beats real customer contact.
Finally, people fall in love with their prototypes too easily. They are blinded and cannot see that their brainchild is an ugly duckling rather than a beautiful swan.
So how can you do it better?
My first suggestion would be NOT to start with a technology but to start with the customer. Identifying some really important customer problems before identifying solutions is key.
Secondly, employee ideas, but also external ideas (e.g. via a competition), should be used to generate minimum viable product requirements on paper before building any prototype. The solution definitions should be reviewed by customers to get early feedback. In addition to the solution, other elements should be evaluated as well, e.g. price, customer channel, unique value proposition, customer acquisition costs, etc. A good framework to use is the Lean Canvas.
Only after customers have validated your Lean Canvas and minimum viable product design should you go and build a prototype, or even better, the minimum viable product itself. Launching the product within months and only adding features after the initial product has been successful should lower your initial costs and your risk of failure.
If you are looking for ways to launch innovative new products quickly, why don't we talk (maarten at telruptive dot com)…
With Big Data in the news all day, you would think that having a lot of high-quality data is a guarantee for new revenues. However, asking yourself how to generate new revenues from existing data is the wrong question. It is sub-optimal because it is like having a hammer and assuming everything else is a nail.
A better question to ask is: "What data-insight problems do potential customers have that I could solve?"
If you haven’t heart of Arduino or Raspberry Pi, then you need to get up to speed urgently. Arduino is revolutionizing hardware and gadget innovations. It is a do-it-your-self-hardware-kit that allows you to build complex systems by stacking up components like GPRS/3G, NFC, etc. Raspberry Pi is an ARM GNU/Linux box for $25.
However, Kickstarter just funded the next generation of such projects:
The Parallella supercomputer for $99: a 16-core computer on a small board for an affordable price and with very low power consumption. Imagine stacking a hundred Parallellas in a box. A parallel programming competition has already been set up.
These projects are all open hardware and open source, so expect hobbyists to come up with lots of cool ideas…
Data Scientist is going to be the sexiest job of the 21st century. However, do we really need a new army of data scientists, or is there an alternative? There might be, and it is called data democracy.
What is data democracy?
Data democracy gives everybody access to data insights. In an enterprise, data democracy is about enabling knowledge workers to share insights. To avoid the construction of data silos. To democratize tools so that every co-worker can become a data scientist without needing a PhD in statistics, mathematics, etc. Visual tools that allow "Excel users" to apply neural networks, support vector machines, random forests, etc. to make predictions or to classify and cluster data, but without needing to understand the underlying machine learning algorithms in great detail. A sort of corporate RapidMiner that scales.
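To illustrate how little should stand between a knowledge worker and such algorithms: with today's libraries, training a random forest is already just a few lines of Python, and the visual tools described above would hide even these (the dataset here is the classic Iris sample):

```python
# Illustration: the machinery a data-democracy tool would hide. Training
# and scoring a random forest takes only a few lines with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```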
At the same time we also need better visualization tools. Everybody should be able to create infographics easily. Tools that allow ordinary people to create stunning data visualizations that go beyond the boring reports.
Finally we need better tools to find and share data insights. We need a “Databook”. A Facebook to easily find the data insight you need. A tool that allows you to distribute your predictions about next quarter’s sales and to compare them with the predictions of others.
In summary, we need the data scientists of this world to focus on making access to data insight available to every knowledge worker. Simplify instead of algorithmify! Enable everybody to be a data scientist…
Big Data is the hype right now. Everything that comes close to Hadoop or NoSQL turns into gold! Unfortunately, we are getting close to Gartner's "Peak of Inflated Expectations". Hadoop does an excellent job at storing many terabytes of data and running relatively complex Map-Reduce operations. Unfortunately, this is just the tip of the Big Data requirements iceberg.
Doing intelligent Big Data analytics requires more than counting who visited a website. Map-Reduce is able to do complex machine learning, but it is not really made for it: the Mahout project has to jump through too many hoops to convert matrix-based analytics algorithms into Map-Reduce-enabled versions. Map-Reduce just is not an easy way of doing matrix operations, and unfortunately most machine learning algorithms rely on matrices. Also, real-time and batch often go together in real life: you need to pre-calculate recommendations or train a neural network in batch, but you want recommendations, predictions and classifications in real time. Unfortunately, Hadoop is only good at one of the two.
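To make the mismatch concrete, here is a toy, single-machine imitation of how iterative gradient descent maps onto Map-Reduce: every single iteration is a full map-plus-reduce pass over all data partitions, and on a real Hadoop cluster each such pass pays job start-up and disk-shuffle costs (all numbers here are illustrative):

```python
# Toy illustration (plain Python, no Hadoop): iterative least-squares
# gradient descent expressed as repeated map/reduce passes. On Hadoop,
# every one of the 100 iterations would be a separate job with start-up
# and shuffle overhead, which is the core mismatch for iterative ML.
import numpy as np

def map_partial_gradient(partition, w):
    X, y = partition
    return X.T @ (X @ w - y)          # "mapper": partial gradient per split

def reduce_sum(partials):
    return sum(partials)              # "reducer": combine partial gradients

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 5))
y = X @ rng.normal(size=5)
partitions = [(X[i::4], y[i::4]) for i in range(4)]  # 4 "mapper" splits

w = np.zeros(5)
for _ in range(100):                  # 100 full "Map-Reduce jobs"
    grad = reduce_sum(map_partial_gradient(p, w) for p in partitions)
    w -= 0.0001 * grad

print("residual:", np.linalg.norm(X @ w - y))  # close to zero
```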
So when the majority of investors and business analysts realize that Hadoop has limitations, what will happen?
Answer: nothing unexpected. Hadoop will continue to be used for what it does best. A new hype will arrive as soon as somebody solves the real-time distributed analytics problem…