70% of today’s Top 1000 companies are expected to be gone within a decade. Big companies are not adapting to change, and Digital Darwinism does the rest.
What is the reason behind Digital Darwinism?
Why can’t companies adapt to change? The ideal sector to see disruptive innovation at work is technology. Billions are spent bringing products to market that fail. Many giants of yesterday are no more. Five smart guys and a dotcom name can make a multi-billion empire tremble.
Often the disrupted are very well managed companies: companies that have put top-quality processes in place, listened to their customers and continuously cut costs to offer a compelling, quality product. Still, along comes a new technology and what looked so great yesterday is called legacy today. Cloud is killing x86 servers, x86 servers killed mainframes, and so on.
You can go and read the books about disruptive innovation. However, there is a more substantial reason why innovation can kill companies so quickly. In most companies there are three categories of people: the weird, the cost centres and the cool. The weird guys are the techies, the geeks, the nerds, etc. You need them, but please don’t let them come out of their cubicle. Everyone who is not directly bringing in new revenue goes into the cost-centre category, e.g. finance, legal, HR, etc. Some CFOs tried to join the cool group but ended up in jail. The cool gang are all the sales, pre-sales and marketing folks. They do the really hot and difficult stuff. Project managers and solution architects get the blame when the projects the cool gang sold cannot be delivered.
If this is the reality in your company, then you are likely to be searching for another company in the future. The reason is very simple: if your company does not value technical talent, HR is seen as a cost centre, and sales and this quarter are the only things that matter, then there will be nobody to tell top management that the right technical people are not being hired and that the current solutions are fast becoming legacy.
Disruptive innovations kill old business models. Many sales forces are good at selling established products. Most do a poor job at selling innovative new ideas. Expect an innovation that kills your old business model every 2 to 10 years. The technical experts are often the first to see those changes coming; the salespeople are the last. The technical expert will tell you Mongo is cool. The salesperson will tell you that Oracle is best bought as an appliance, not through the cloud, for performance reasons. The salesperson cannot understand that other companies use open source or SaaS to gain market share. It looks very bad on your quarterly results if you give your software away or only charge a small monthly fee instead of an upfront licence.
How can you survive Digital Darwinism?
The main step is to stop organising companies around job functions and to see the value in each job function. Yes, you need a sales force that manages customer relationships and can sell many products. However, you don’t need pre-sales, business development and marketing to be part of it. It is much better to organise the rest of the organisation around product offerings, with pre-sales, business development, marketing, finance, operations, delivery, R&D and support all forming part of the same product team. To make the best products you need to understand what customers want, how to reach them, how to develop the product, how to price, how to segment and how to support customers. This is why early startups are so successful: they don’t have to queue to get a project manager assigned to their project. Modern organisations are full of queues and buffers, and this creates slowness. It is a lot better to make people responsible for a product and to combine people from different groups. As soon as the group reaches 100 people, split it, otherwise it becomes slow again. But split by customer segment, not by job function. This way it is possible to have products that compete against one another inside one organisation, and sales will be continuously challenged to learn new things.
Another important point is to hire generalists: people who understand both technology and business. The world moves so fast that any expert will become obsolete within a few years. It is better to have generalists who are quick learners.
Failure is the best option for future success. As soon as an organisation realises that it cannot win every battle, it substantially increases its chance of winning the war. Failure should be part of all processes.
Finally, you need the discipline to sell market-leading products to others. This is the only way to get overpaid, and it guarantees that the rest of the organisation does not fall asleep. People love to become millionaires when their company sells out. Why should only startups have this privilege? Take away the reason why people want to suffer in a five-person company and you will attract top talent independent of your size.
Cisco coined the term Fog Computing and The Wall Street Journal has endorsed it, so I guess Fog Computing will become the next hype.
What is Fog Computing?
The Internet of Things will embed connectivity into billions of devices. Common thinking says your IoT device is connected to the cloud and shares data for Big Data analytics. However, if your Fitbit starts sending your heartbeat every 5 seconds, your thermometer tells the cloud every minute that it is still 23.4 degrees, your car sends the manufacturer hourly statistics, farmers measure thousands of acres and hospitals measure remote patients’ health continuously, then your telecom operator will go bankrupt, because their network is not designed for this IoT Data Tsunami.
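To get a feel for the scale, here is a back-of-envelope sketch. The fleet size, reporting interval and payload size are my own assumptions, not figures from the post:

```python
# Rough sizing of the "IoT Data Tsunami": 10 million wearables,
# each sending one small reading every 5 seconds.
# All three constants below are illustrative assumptions.

DEVICES = 10_000_000      # assumed fleet size
INTERVAL_S = 5            # one reading every 5 seconds
PAYLOAD_BYTES = 200       # assumed size of one report incl. headers

readings_per_day = 24 * 3600 // INTERVAL_S       # per device
messages_per_day = DEVICES * readings_per_day    # fleet-wide
bytes_per_day = messages_per_day * PAYLOAD_BYTES

print(f"{messages_per_day:,} messages/day")      # 172,800,000,000
print(f"{bytes_per_day / 1e12:.2f} TB/day")      # 34.56 TB/day
```

Even with these modest assumptions, one product line generates on the order of hundreds of billions of messages a day, which is exactly the load mobile networks were never dimensioned for.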
Fog Computing is about taking decisions as close to the data as possible. Hadoop and other Big Data solutions started the trend of bringing processing to where the data is, not the other way around. Fog Computing is about doing the same on a global scale: take decisions as close as possible to where the data is generated and stop raw data from reaching global networks. Only valuable data should travel on global networks. Your Fitbit could send average heartbeat reports every hour or day and only send alerts when your heartbeat exceeds a threshold for some amount of time.
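A minimal sketch of that Fitbit example (threshold, streak length and report shape are made up for illustration): keep raw readings on the device, upload only a small hourly summary, and emit an alert only when the heart rate stays above a threshold for a sustained stretch.

```python
# Device-side Fog logic sketch: summarise locally, alert on sustained
# threshold breaches. Constants are illustrative assumptions.

THRESHOLD_BPM = 120
SUSTAINED_READINGS = 3   # alert only after 3 consecutive high readings

def summarise_hour(readings):
    """Reduce an hour of raw readings to one small uploadable report."""
    return {"avg": sum(readings) / len(readings),
            "min": min(readings), "max": max(readings)}

def alerts(readings):
    """Yield an alert only when the threshold is exceeded long enough."""
    streak = 0
    for i, bpm in enumerate(readings):
        streak = streak + 1 if bpm > THRESHOLD_BPM else 0
        if streak == SUSTAINED_READINGS:
            yield {"at_reading": i, "bpm": bpm}

hour = [72, 75, 130, 131, 135, 80, 74]
print(summarise_hour(hour))   # one report instead of seven uploads
print(list(alerts(hour)))     # [{'at_reading': 4, 'bpm': 135}]
```

The network sees one summary per hour plus rare alerts instead of a reading every few seconds; the raw stream never leaves the device.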
How to implement Fog Computing?
Fog Computing is best done via machine learning models that are trained on a fraction of the data in the Cloud. Once a model is considered adequate, it gets pushed to the devices. Having a decision tree, some fuzzy logic or even a deep belief network run locally on a device to take a decision is much cheaper than setting up a Cloud infrastructure that needs to deal with raw data from millions of devices. So there are economic advantages to Fog Computing. What is needed are easy-to-use solutions to train models and send them to highly optimised, low-resource execution engines that can easily be embedded in devices, mobile phones and smart hubs/gateways.
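A hedged sketch of that train-in-cloud, decide-on-device split. The "model" here is deliberately the simplest possible case, a single-feature decision stump learned on a small sample (standing in for real cloud training); the point is that what travels to the device is a tiny serialisable dict, not the training data:

```python
# Cloud side trains a trivial model; device side makes a local,
# one-comparison decision with no cloud round trip.
# Sample data and feature name are invented for illustration.
import json

def train_stump(samples):
    """Cloud side: samples are (value, label) pairs, label 1 = alert.
    Pick the threshold that best separates the labels."""
    best = None
    for threshold, _ in samples:
        errors = sum((v > threshold) != bool(y) for v, y in samples)
        if best is None or errors < best[1]:
            best = (threshold, errors)
    return {"feature": "heart_rate", "threshold": best[0]}

def device_decide(model, value):
    """Device side: evaluate the pushed model locally."""
    return value > model["threshold"]

sample = [(70, 0), (85, 0), (95, 0), (125, 1), (140, 1)]
model = train_stump(sample)
pushed = json.loads(json.dumps(model))  # what actually travels to devices
print(pushed, device_decide(pushed, 130), device_decide(pushed, 72))
```

A real deployment would push a proper decision tree or neural network the same way: train centrally, serialise, execute in a small embedded runtime.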
Fog Computing is also useful for Non-IoT
Network elements should also become a lot more intelligent. When was the last time you were at a large event with many people around you? Can you remember any event in the last 24 months where the WiFi worked brilliantly? Most of the time WiFi works in the morning while people are still arriving, but soon after it stops working. Fog Computing can be the answer here: analyse the data patterns and take decisions about what consumes the bandwidth. Chances are that all the mobiles, tablets and laptops connected to the event WiFi have Dropbox or some other large file-sharing service enabled. You take some pictures at the event and, since you are on WiFi, the network gets saturated by a photo-sharing service that is not really critical for the event. Fog Computing would detect this type of bandwidth abuse and limit or even block it. At the moment this has to be done manually, but computers would do a much better job at it. So Software Defined Networking should be all over Fog Computing.
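The detection step could be as simple as the sketch below: aggregate bytes per application from flow records and flag non-critical heavy hitters for rate-limiting. The app names, the critical set and the 30% cutoff are all invented for illustration:

```python
# Event-WiFi sketch: find apps hogging the access point so an SDN
# controller could rate-limit them. All names/limits are assumptions.
from collections import Counter

CRITICAL = {"event-app", "dns"}   # never throttle these
SHARE_LIMIT = 0.30                # flag apps above 30% of total traffic

def heavy_hitters(flows):
    """flows: [(app_name, bytes)] -> apps that should be rate-limited."""
    per_app = Counter()
    for app, nbytes in flows:
        per_app[app] += nbytes
    total = sum(per_app.values())
    return [app for app, b in per_app.items()
            if app not in CRITICAL and b / total > SHARE_LIMIT]

flows = [("dropbox", 800), ("event-app", 100),
         ("dropbox", 600), ("mail", 50), ("dns", 5)]
print(heavy_hitters(flows))       # ['dropbox']
```

In a real access point this logic would run on the edge device itself and feed the result into an SDN rule, which is exactly the Fog pattern: the raw flow records never need to leave the venue.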
Telecom Operators and Equipment Manufacturers Should Embrace Fog Computing
Telecom operators should invest heavily in Fog Computing by creating open source standards that can be easily embedded in any device and managed from any cloud. When I say standards, I don’t mean ETSI. I mean organise a global Fog Computing competition with a $10 million award for the best open source Fog Computing solution. Create a foundation around it with a very open licence, e.g. the Apache License. Invite, and if necessary oblige, all telecom and general network suppliers to embed it.
The alternatives are…
Not solving this problem will provoke heavy investment in global networks that carry 90% junk data, and an IoT Data Tsunami. Solving it via network traffic shaping is a dangerous play in which privacy and net neutrality will come up sooner rather than later. You cannot block Dropbox, YouTube or Netflix traffic globally. It is a lot easier if everybody blocks, or at least minimises, unneeded traffic themselves. Most people have no idea how to do that. Creating easy-to-use open source tools would be a good first step…
An online bookstore did not only redefine retail and content distribution and give the postal services a second chance, it is also becoming the world’s data centre. The best way to find out if the hot schoolgirl is open to a new relationship is now showing IT companies how to build servers and routers, and telecom giants how people like to communicate. An online search and advertisement company has revolutionised how you find anything: text, images, locations, etc. It redefined mobile computing together with a fruit-branded company. It has global networks that even the biggest telecom incumbents can only dream of. It has cars that drive themselves and body accessories that put science fiction authors next to historians.
At the same time, stamps, travel agents, maps, telephone books, book publishers, billboards, broadcasters, movie theatres, journalists, photo film, media storage, video cameras, taxi services, estate agents, high street shops, etc. have all changed, and not always for the better.
If you work for a “traditional” company, are you sure your company will still be in business in five years? Or could some unknown small company launch a product that makes your company’s best products look like they belong in a history museum? Remember Nokia phones!!! Five years ago they had record sales…
If software disruptors have so much power, why aren’t companies hiring chief disruption officers? Senior executives whose goal is to set up disruptive new product families that are owned by traditional players but are allowed to question any industry rule and launch cannibalising offerings, often as independent companies.
It is a lot better for a big bank to own a Bitcoin exchange, a peer-to-peer lender, a crowdfunded venture capitalist, a mobile payment provider, a micro-payment cloud broker, a mobile app currency exchange, a machine learning financial adviser, etc. than to be put out of business by a disruptive challenger.
Of course you can always copy the telecom model: have everybody in your company look for potential cost reductions in the form of virtualised networks, squeezing (and killing) suppliers, etc. while your (mobile) broadband network is 12-36 months away from a data tsunami in the form of 4K streaming video, free mobile video calls, fitbits telling the cloud every minute (or second) your average heartbeat and twenty other vital signs, free-frequency crowdsourced mobile networks, etc. All at a time when your business model has not seen a margin improvement in 10 years, your costs are exploding and your revenue will melt faster than ice in the Sahara.
Why don’t you think about hiring a chief disruption officer before you need to hire a chief miracle officer…
Why is it that a five-person startup can bring an industry to its knees? There are many answers, but open source and horizontal scaling are good ones. Traditionally, companies built solutions that were proprietary and optimised for deployment on a small number of expensive servers. It took traditional IT departments quite some time to integrate those solutions, and they would not touch them for years afterwards. The result is that software companies would add a long list of features, because customers wanted assurance that the future was covered. These solutions would be “featureware”. The market leader would have the longest feature list and could solve any problem given enough time and money. The more, the better.
There is no better example than the telecom industry. Telecom solutions are overloaded with features, hard to use and integrate, and as a result very expensive.
If you see this pattern as a disruptor or challenger then you should be extremely pleased. It means that brains can beat the dinosaurs.
Make your solution open source and make it horizontally scalable. Why? Traditional software vendors optimised for specific, expensive hardware. Their thinking is that to grow you need a bigger box. Their licences are expensive per socket, so customers would buy the biggest and most expensive servers possible.
If your solution installs on any public or private cloud, scales horizontally and is open source, then customers that want to save costs (almost everybody in almost all industries!!!) will have their R&D departments try it. The temptation is just too big. Make your software easy to use by using the latest web technologies and by focusing on less is more, and you will be a winner.
Let me give you an example. Metaswitch is by all means a traditional telecom solution provider that has been playing according to the rules. One day, however, they decided to be different. They made an open source IMS solution (something all telecoms use to handle calls) and used the latest dotcom technologies like memcached and Cassandra. The result is that every telecom R&D department is now testing Clearwater. By working with Canonical and their award-winning open source product Juju, Clearwater will be able to be deployed, integrated and scaled in minutes, everywhere. So what traditional vendors do in 12 months for many millions, you can now do for free in minutes. However, nobody will put an unsupported solution into production, hence customers will pay for a commercially supported version.
Does this only apply to telecom? No! In industrial domains, banking, retail, media, etc. many similar examples are coming. Brains will beat dinosaurs. So if you are willing to be a challenger and convert a billion-dollar market into many millions flowing to only one company, there has never been a better time to become a blue ocean strategist…
This week a new Juju Lab was launched: Instant Single Sign-on and 2-Factor Authentication. The Juju Lab is a new direction for Juju innovation in which a community of contributors builds a revolutionary solution for a common problem. This time the problem is how to make the world more secure, instantly. Juju Labs works like Kickstarter: either the goals are met and the project becomes a full Juju solution, or the project dies.
Future Juju Labs are being considered: everything from enterprise Java auto-tuning and instantly scaling PHP to instant legacy integration, instant BI, etc. As long as it solves a common problem in an exponentially better way, creating a Juju Lab is an option.
The main problem is how to quickly evaluate which common problem to tackle first. Any ideas are welcome…
After years of virtually no innovation from telecom operators, 2014 will be different. Not because telecom dinosaurs have all of a sudden become lean, mean innovation machines. Quite the contrary: most operators are still focusing on rolling out THIS YEAR’s (instead of today’s) “innovative” service, which will be just a copycat of some famous dotcom.
So why the excitement?
2014 will be the pivot year. The year that will be marked in history books as the year old school lost and innovators won.
The first Ryanair-like disruptive telecoms will leave their borders and start bankrupting “traditional telecoms”. Cross-platform voice/video 4G apps will reach the tipping point. Cloud Telco PaaS will become reality. Individual communication solutions, or iCommunication, will be a reality. Web 3.0 will include voice and video communication. NFV will be driven by non-telecom players. WAN SDN will be deployed by more than just Google, Amazon, etc. Cloud media streaming will reach the tipping point. The Internet of Things will meet Cloud will meet Big Data will meet Mobile will meet disruptive communication solutions. An early adopter’s paradise…
2014 will be an exciting year for those that love telecom innovation!!! Bit pipe nightmares becoming reality for others.
2014 will be the year in which telecom splits in two: the operators that understand iCommunication and the ones that don’t. iCommunication is about giving a personalized communication experience to consumers and enterprises. Low-cost subscription models and freemium will be the main business models. Low-cost pay-per-use is still possible, but not for messaging or voice traffic; the value proposition needs to be higher.
What will this mean?
Bit pipes will become a reality in Europe and possibly in the US (mainly depending on what Google and others do). Telecom operators will make massive headcount reductions. Nokia and Blackberry will be joined by other one-time big telco names. The end of the world for some, especially for those that believe telecom is a dividend generator or a bottomless pit for licence taxation…
For consumers and enterprises there will be a new world of communication possibilities. Communication will be fully integrated into back-office systems, e.g. CRMs like Salesforce storing all calls. Improvements in voice recognition will make talking to machines a natural interface. Managing contacts will become a breeze. Forget memorizing phone numbers…
Communication as a Service will be the big innovation: the Cloud, Big Data and IoT will meet IP communication. Whatsapp will get a bigger brother for voice and video, unless Google and Apple surprise the market with joint IP-based communication over LTE and WiFi. Asia, Africa and Latam will have two more years, but most of their operators will make the same mistakes as the European ones.
Bit pipes are not even a safe business, because the Ryanair of telecom will be able to quickly pick up the mobile licences and networks of the third/fourth player, the one that goes bankrupt.
Things will not look nice for some over the next three years, but we have all seen this coming for the last 10-15 years. Any CxO who calls this an unforeseen disruptive technology should be fired on the spot. The next edition of The Innovator’s Dilemma will not have to go back to the last century for examples. This is a textbook case for MBA students for years to come…
If you read sites like highscalability.com you will certainly have read about those big-name dotcoms that deploy new features to production up to tens of times a day. For most startups, bringing features to production is still a manual, at best semi-manual, process. There is the odd start-up that has it all automated, but unfortunately that is often a signal that they have too much time on their hands, which points towards more critical problems.
What if startups did not have to worry about how to set up hourly feature deployment? What if they could get an open source solution that delivers flexible and highly scalable continuous deployment in minutes?
What if startups could launch new features faster than the top dotcoms and scale almost as well?
If this sounds attractive to you or you know a start-up to whom it would be, then you should visit this blog post. Ubuntu has launched a beta program and if enough startups sign up, then they will build an instant and scalable open source continuous deployment solution for them.
An MIT student recently created a new type of massively distributed database, one that runs on graphics processors instead of CPUs. Mapd, as it is called, makes use of the immense computational power available in the off-the-shelf graphics cards found in any laptop or PC. Mapd is especially suitable for real-time querying, data analysis, machine learning and data visualization. Mapd is probably only the first of many databases that will try new hardware configurations to cater for specific application use cases.
Alternative approaches could focus on large sets of cheap mobile processors, Parallella boards, Raspberry Pis, etc. all stitched together. The idea would be to create massive processing clouds based on cheap specialised hardware that could beat traditional CPU clouds on both price and performance, at least for some specific use cases…