Eliminating RFPs to make enterprise software sexy

November 28, 2014

Today I had a meeting that could be the beginning of the end of RFPs for buying software. RFPs are the tool established buyers and vendors use to keep new entrants at bay, yet I have never met anybody who says they love writing or responding to them. The effect of RFPs on software is perverse. The main problem is that you can't ask whether the software is beautiful, easy to use, fast to integrate, efficient, effective at solving a business problem, secure, etc. Instead you ask whether the vendor provides training, because you assume the product is ugly and difficult. You ask whether they offer consultancy services, an SDK and a connector library, because you assume it is hard to integrate. You assume you will need to customise it for months because it will not be effective out of the box. But most importantly, since you will be stuck with the software for years, you ask whether it supports every potential feature that perhaps in five years might be needed for five minutes. It is this last set of questions that kills any innovation and ease of use in business software. A product manager on the receiving end will get funding to add those absurd features when customers ask for them. A career-limiting move would be to ask for budget to remove useless features, or to admit that your product looks worse than Frankenstein's monster.

So how can you make sure that software is beautiful, does what it is supposed to do efficiently and effectively, is fast, nimble, easy to use, secure, scalable, fast to integrate, future proof, etc.? You do what you do when you buy a car: you ask for the keys to different models, take them for a serious spin and push them to their limits.

So what you propose is a three-month PoC for each potential solution?
No. What I propose is being able to get your hands on all the alternative software solutions, deploy, integrate and scale them in hours or even minutes, and then let loose a battery of automated performance tests and rough end users, even some ethical hackers or competitors.

If the software does what it says on the tin and is effective, efficient, beautiful, secure, fast, scalable, easy, etc., then you negotiate pricing or use it for a minimum viable product.

It used to be impossible to do all of this in hours, but with tools for quickly deploying private clouds and cloud orchestration solutions like Juju, we are actually planning to try this approach with a real customer and real suppliers. To be continued…
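As a minimal sketch of what such a bake-off could look like with Juju (the charm names for the candidate solutions are invented for illustration; the juju commands themselves are standard):

    # Stand up candidate A, wire it to a test database and scale it out.
    juju deploy candidate-a          # hypothetical charm for one vendor's solution
    juju deploy postgresql
    juju add-relation candidate-a postgresql
    juju add-unit -n 4 candidate-a   # five units in total for the load test

    # Tear it down and repeat with candidate B once the tests have run.
    juju destroy-service candidate-a

Run the same automated test suite against each candidate and you have an apples-to-apples comparison in an afternoon instead of a three-month PoC.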

Internet of Things Challenges and Opportunities

November 21, 2014

IoT is one of the biggest potential new revenue streams but also one of the most challenging technical problems we have today.

The technical challenges

IoT is not just sensors + Big Data analytics + cloud + short-range low-energy networking and Internet. The real problem is that you have to be good at many different technologies that used to be separate, and one mistake can have disastrous effects. You have to be good at miniature sensors that must run for two years on one tiny battery and that use software even the biggest geek hates to work on. You have to make sure IPv6 networking fits these small-footprint devices, with innovations like CoAP and 6LoWPAN. You have to learn the world of microcontrollers, open source hardware like Arduino, micro-computing platforms like Raspberry Pi and Edison, ARM Cortex, Intel Quark, etc. You also need to know about new and old low-energy networking technologies like ZigBee, Bluetooth Low Energy, etc.

You then want your sensors connected to a hub, because otherwise you would need a SIM or WiFi in each sensor, which would drain the battery. So you need to build a smart hub that ideally can run apps from different developers and can support lots of new devices. You also want devices to support peer-to-peer technologies like Thread, or new standards from Intel, Qualcomm or any of the numerous standardisation bodies. You want to use 3D printing to print an attractive casing. You want to use crowd-funding to sell your smart hub. You want mobile apps to work flawlessly with IoT. You need to know about powerline networking, gesture control, in-building location tracking, voice control, etc. if you want to compete with the best smart hubs. And you need GPRS, 3G, 4G, white spaces, long-haul radio, WiFi or fiber broadband to communicate with the rest of the world.

On the cloud side, be it a public cloud or a private OpenStack, you need the latest DevOps tools, cloud orchestration tools and containers like Docker to deploy scale-out queues, real-time stream processing and other Big Data analytics solutions. You need to be able to train deep belief networks and push models to hubs and sensors, and to recognise threatening video images. You need to be able to do rolling upgrades and continuous deployment of updates, developer apps, etc., and manage operations across millions of devices and billions of sensors. You want a store. A developer eco-system.

And when you have finally mastered all of this: make one security mistake, and a hacker on the other side of the world can control your house, business, city or country.
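To make the CoAP point above concrete: a constrained sensor typically exposes its readings as REST-style resources that a hub can poll over UDP. A minimal sketch using libcoap's coap-client (the IPv6 address and resource path are invented for illustration):

    # Discover which resources a 6LoWPAN sensor exposes (resource discovery is part of CoAP).
    coap-client -m get "coap://[2001:db8::1]/.well-known/core"

    # Read one sensor value; CoAP runs over UDP so the radio stays on as briefly as possible.
    coap-client -m get "coap://[2001:db8::1]/sensors/temperature"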

The business opportunities are huge as well. Save a couple of percentage points on the production costs of a car and you can save hundreds of millions. Track a global epidemic or the vital signs of a billion people and you can save millions of lives. Give millions of developers a new way to channel their creativity and the Angry Birds of IoT will create new industries. The one that changes the habits of people will be the next billionaire.

Where is the money? Industrial IoT. Where is the innovation? Home automation and wearables. You can’t pick one. You need to connect innovation with money if you want to lead the IoT revolution. If somebody else does it for you, they can make your solution irrelevant.

Several telecom operators to run into financial problems in the next three years…

November 21, 2014

In 2017 several telecom operators will run into financial problems, Vodafone being the best known among them, unless they start changing today. Why?

The telecom business is very capital intensive. Buying spectrum, rolling out next-generation mobile networks and bringing fiber connections to every home and business costs enormous amounts of money. Traditionally operators were the main users of their networks and earned large margins on the services that ran on top of them. The truth today is that telecom operators have been completely sidetracked. They no longer control the mobile devices used on their networks, nor the services. Data is growing exponentially and is already clogging their networks; a data tsunami is on the horizon. Operators see costs ballooning and ARPU shrinking. There is no way they can start asking substantially more for broadband access. Obama just killed any hope of adding a speed tax on the Internet, and the EU wants to kill juicy roaming charges. However, the future will be even worse.

New disruptive competitors have entered the market in recent years. Google Fiber is offering gigabit speeds both for uploading and downloading. YouTube and Netflix generate the majority of Internet traffic in most countries. Most streaming video is still broadcast in SD quality, but Netflix is already streaming in 4K, or ultra-high-definition quality, on Google Fiber. This means traffic volumes of between 7 and 19 GB per hour, depending on the codec used. Take into account that different family members are often watching two or more programmes at the same time. The end result is that today's networks and spectrum are completely insufficient. Now add the nascent IoT revolution. Every machine on earth will get an IP address and be able to "share its feelings with the world". Every vital sign of each person in the richer parts of the world will be collected by smart watches and tweeted about on social networks. 90% of the communication running inside Facebook's data centres is machine-to-machine communication, not user-related communication, and Facebook hasn't even introduced IoT or wearables yet. You can easily imagine them helping even the biggest geek with suggestions on which girl to talk to and what to talk about, via augmented-reality goggles and with the help of smart watches. Yes, it is a crazy example, but which telecom marketing department would have given even $1 to Zuckerberg if he had pitched Facebook to them when it was still known as TheFacebook? It is the perfect example of how "crazy entrepreneurs" make telecom executives look like dinosaurs.
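For those who want to sanity-check the 7-19 GB figure, converting a video bitrate into an hourly volume is simple arithmetic (the two bitrates below are illustrative assumptions for different 4K codecs):

    GB per hour = (bitrate in Mbps × 3600 seconds) / (8 bits per byte × 1000)

    15.5 Mbps  →  ≈ 7 GB per hour
    42 Mbps    →  ≈ 19 GB per hour

Multiply by two or three simultaneous streams per household and the pressure on today's access networks becomes obvious.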

This brings us to the internals of how telecom operators are run. Marketing departments decide what customers MUST like, often based on more than doubtful market studies and business plans. In contrast, the mobile app stores of this world just let customers decide. Angry Birds might not be the most intelligent app, but it sure is a money maker.

Procurement departments decide which network and IT infrastructure is best for the company. Ask them what NFV or SDN means and the only thing they can sensibly respond with is an RFP identifier. Do you really think any procurement department can make a sensible decision on which network technology will be able to compete with Google? More importantly, can they make sure these solutions are deployed at Google speed, integrated at Google speed and scaled out at Google speed? If they pick a "Telecom-Grade Feature Monster" that takes years to integrate, they have killed any chance of that operator being innovative. With all the telecom-grade solutions operators have, why is it that Google's solutions are more responsive, offer better quality of service and are always available?

Vittorio Colao, the Vodafone CEO, was quoted in a financial newspaper yesterday saying Vodafone is going to have to participate in the crazy price war around digital content because BT has moved into mobile. So one of the biggest telecom operators in the world has executive strategies like launching new tariff plans [think RED in 2013], paying crazy money to broadcast football matches, and bundling mobile with fixed to discount overall monthly tariffs and erode ARPU even more. If you can get paid millions to just look at what competitors and dotcoms are doing and badly copy them [the list is long: hosted email, portals, mobile portals, social networks, virtual desktops, IaaS, streaming video, etc.], then please allow me to put your long-term viability into question.

So can it actually be done differently? YES, for sure. What if operators enabled customers to customise communication solutions to their needs? Communication needs have not gone away; if anything they have grown. WhatsApp, Google Hangouts, etc. are clear examples of how SMS and phone calls can be improved. However, they are just the tip of the iceberg of what is possible and what should be done. Network-integrated apps via telco app stores would give innovators a chance to launch services that customers really like. Hands up, who would pay to get rid of their current voicemail? Hands up, who really loves their operator's conference bridge and thinks it is state of the art? Hands up, who is of the opinion that a bakery is absolutely not interested in knowing what its customers think about its products after they have left the shop?

Last week the TAD Summit in Turkey had a very special presentation from Truphone, one of the few disruptive mobile operators in the world. No wonder it won the best presentation award. Truphone, with the help of partners, deployed a telecom solution in minutes that included key components like IMS, an SDP, HLR integration, one hundred phone numbers, dashboards, interactive voice responses, etc. Once it was deployed, the audience could immediately start calling in and participating. The numbers of the people in the audience, their home operator, the operator that originally sold them their SIM, their age and their responses to interactive questions were all registered, with the results shown on a real-time dashboard. Had the audience been in different locations, they could have been put on an interactive map as well. The whole solution took only a few weeks to build, with a team of people who all had day jobs. The surprising thing is that it was all built with open source software. It is technically possible to innovate big time in telecom and bring new services to market daily, all at a fraction of today's cost. The technology is no longer a limiting factor. Old-school thinking, bureaucracy and incompetence are the only things holding operators back from changing their destiny. Whatever they do, they shouldn't act like former Nokia executives in a few years and tell the world that Android and the iPhone took them by surprise. Dear mister operator, you have been warned. You have been given good advice and examples of how to do it better. Now it is time to act upon them…

A Layman’s Guide to the Big Data Ecosystem

November 19, 2014

Charles – Chuck – Butler, a colleague at Canonical, wrote a very nice blog post explaining the basics of Big Data. It not only explains the concepts but also shows how anybody can set up Big Data solutions in minutes via Juju. Highly recommended reading:

http://blog.dasroot.net/a-laymans-guide-to-the-big-data-ecosystem/

This is a good example of the power of cloud orchestration. An expert encodes the operational knowledge once in Juju charms and bundles, and afterwards anybody can easily deploy, integrate and scale the same Big Data solution in minutes.
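As a minimal sketch of that workflow (the charm names below are illustrative, since the charm store evolves over time; the juju commands are standard):

    juju deploy hadoop-master                      # hypothetical charm names for a Hadoop cluster
    juju deploy hadoop-slave
    juju add-relation hadoop-master hadoop-slave   # the charms negotiate the configuration themselves
    juju add-unit -n 3 hadoop-slave                # scale out: three more worker nodes join automatically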

Samuel Cozannet, another colleague, used some of these components to create an open source Tweet sentiment analysis solution that can be deployed in 14 minutes and includes autoscaling, a dashboard, Hadoop, Storm, Kafka, etc. He presented it at the OpenStack Developer Summit in Paris and will shortly provide instructions for everybody to set it up.

IoT success = cutting-edge engineering + out-of-the-box business thinking

November 15, 2014

The Internet of Things, or IoT for short, is going to be the biggest revolution of this century. However, the engineering challenges are massive. Billions of low-energy sensors will have to be invented, produced, integrated, operated and maintained. These sensors will talk to millions of gateways over new network protocols. The gateways will communicate via next-generation network devices and software-defined networks to public and private clouds. These clouds will run scale-out Big Data solutions that will dwarf even the biggest supercomputers of today. Billions of containers will be running massive micro-services-based IoT business solutions. One mistake, and a hacker from a country you wouldn't even know how to locate can bring down a whole house, a whole company, or worse, a whole country.

The business side looks even less clear. The coming open source IoT tsunami means asking for licenses is suicide. Open source hardware and 3D printing result in razor-thin hardware margins. Privacy will be central when technology is so close to people's homes and businesses: "if it is free, then you are the product" will become a lot harder to swallow when a dotcom can know what you are doing in bed, with whom, and even whether you liked it. Business SLAs will be stricter than ever, because customers will expect answers in sub-seconds, yet human operators will no longer be able to find and fix problems because of the overall system complexity. Roll-outs will be global, even though telecom operators' networks don't cross borders and electricity plugs and voltages are super local. Ask customers what IoT solution they want and they will answer: what is IoT?

In these circumstances you might be surprised to hear that I will be focusing on solving these challenges. Telruptive readers will be the first to know when some of them are resolved…

LXD and Docker

November 11, 2014

This blog post is not about the technical details of LXC, LXD, Docker, Kubernetes, etc. It focuses on the different use cases LXD and Docker solve and should help non-experts understand them.
Canonical demoed a prototype of LXD last week at ODS. Several journalists incorrectly concluded that LXD is a competitor to Docker. The truth is that LXD addresses a completely different use case from Docker. Ubuntu 14.04 was the first operating system to provide commercial support for Docker. Six times more Docker images are powered by Ubuntu than by all other operating systems combined. Ubuntu loves Docker.

Different Use Cases?

Docker is focused on being the universal container for applications. Developers love Docker because it allows them to quickly prototype solutions and share them with others. Docker is best compared to an onion: all internal layers are read-only and only the last layer is writeable. This means people can quickly reuse Docker containers made by others and add their own layers on top if desired. Afterwards you upload your "personalised onion" to the Docker Hub so others can benefit from your work. Docker is ideal for boosting developer productivity and showing off innovations.
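A minimal sketch of the onion idea (the image and repository names are placeholders):

    # Every Dockerfile instruction adds a read-only layer on top of the base image;
    # only the running container gets a thin writeable layer on top.
    cat > Dockerfile <<'EOF'
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y nginx
    EOF

    docker build -t myuser/my-nginx .   # build the layered image
    docker push myuser/my-nginx         # share your "personalised onion" on the Docker Hub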
Canonical is the company behind Ubuntu, and Ubuntu powers 64% of all OpenStack clouds in production, OpenStack being the fastest growing open source project in the world. In OpenStack, as on VMware or AWS, you run a hypervisor on a host operating system and then install a guest operating system on top. Because you have two layers of operating systems, you can host many applications on multiple operating systems on one server at the same time. This greatly optimises resource usage compared to non-virtualised servers, but because operating systems are duplicated, a lot of resources are still wasted.

Now, ideally you could put Docker directly inside OpenStack and run all applications from inside containers. The problem is that Docker does not give an administrator the possibility to remotely log into the container and simply add monitoring, backups and the other routine things administrators do to guarantee SLAs. In comes LXD. LXD builds on top of a container technology called LXC, which Docker also used originally. LXD gives you access to what behaves like a virtual server, just as a hypervisor would. The big difference is that LXD does not require operating systems to be duplicated; instead it partitions the host operating system and assures fair and secure sharing between the applications that run inside different containers. The result is that the same server can pack many more applications, and both startup and migration of applications between servers become extremely fast. This idea is not new: mainframes already had containers, and Solaris had containers. LXD just makes sure that your favourite private cloud has containers that are easy to manage.
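Since LXD is still a prototype, treat the following as a sketch of the intended workflow rather than a stable interface (the container name is made up):

    lxc launch ubuntu:14.04 web01   # boots a full Ubuntu userspace in seconds, with no guest kernel
    lxc exec web01 -- bash          # the admin shell hypervisor users expect: add monitoring, backups...
    lxc list                        # manage containers like lightweight virtual servers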

Can a hypervisor, Docker and LXD coexist?

Yes. The hypervisor could make sure Windows runs on top of an Ubuntu host [Linux containers cannot run Windows on top]. Docker containers can host next-generation scale-out solutions that are either purpose-built for Docker or have been changed to support the new paradigms Docker introduces. LXD will be best for all the standard Linux workloads that you just want to move as they are, with no need to update the applications or the tools that are integrated with them.
Since LXD has an Apache licence and is available on GitHub, it is very likely that the future will evolve into a world where the advantages of LXD and Docker get combined in some shape or form, hopefully with new innovations added as well. That is the power of open source innovation, and exactly the reason why Canonical has shared LXD with the world…

IoT and personal health

November 9, 2014

I just saw Eric Dishman's TED talk "Health care should be a team sport". I love the idea of giving people with chronic illnesses the means to be diagnosed and treated remotely, and of using Big Data to learn from a large group of patients with similar issues. Personally this would mean that when my sons have breathing problems, we would not have to drag them in the middle of the night to a hospital where they are exposed to many viruses. Instead, by measuring their oxygen levels and listening to their lungs, a personalised remote diagnosis could be made and a nebulizer or other treatment administered.

At scale, all the equipment would probably cost less than £200: Maplin already sells the nebulizer and oxygen level meter for a combined £110, and add another £90 at worst for a stethoscope that can connect via Bluetooth to a smartphone. Via a Hangout a doctor could remotely interpret the results, and in the future perhaps even a computer programme could. The results of millions of patients would be collected in order to improve treatment. So no need for an expensive hospital in London with a receptionist, a nurse and a doctor dedicating two hours. By avoiding just one hospital night, the whole system would be enormously profitable.

Additionally, Ubuntu's Juju can be used to set up all the Big Data and diagnostic software in minutes, in any cloud or on any server anywhere in the world. If other open source solutions are used, the total solution would be within reach of any developing country. There is probably more than one developer whose kids are asthmatic and who would happily contribute time. It sounds like an ideal Gates Foundation or Kickstarter project. If you think you can help, please reach out to me, because this is not work for me; this is personal.
