IoT is one of the biggest potential new revenue streams but also one of the most challenging technical problems we have today.
The Technical Challenges
IoT is not just sensors plus Big Data analytics plus cloud plus short-range low-energy networking and the Internet. The real problem is that you have to be good at many different technologies that used to be separate, and one mistake can have disastrous effects. You have to be good at miniature sensors that must run for two years on one tiny battery, using software that even the biggest geek hates to work on. At making sure IPv6 networking fits these small-footprint devices, with innovations like CoAP and 6LoWPAN. At the world of micro-controllers, open source hardware like Arduino, micro-computing platforms like Raspberry Pi and Edison, ARM Cortex, Intel Quark, etc. You also need to know about new and old low-energy networking technologies like Zigbee, Bluetooth Low Energy, etc.

Next you want your sensors connected to a hub, because otherwise each sensor would need a SIM or Wifi, which would drain its battery. So you need to build a smart hub that ideally can run apps from different developers and can support lots of new devices. At the same time you want devices to support peer-to-peer technologies like Thread, or new standards from Intel, Qualcomm or any of the numerous standardisation bodies. You want to use 3D printing to print an attractive casing. You want to use crowd-funding to sell your smart hub. You want mobile apps that work flawlessly with IoT. You need to know about Powerline, gesture control, in-building location tracking, voice control, etc. if you want to compete with the best smart hubs.

Then you need to know GPRS, 3G, 4G, White Spaces, long-haul radio, Wifi or fibre broadband to communicate with the rest of the world. On the cloud side, be it public or a private OpenStack, you need to use the latest DevOps tools, cloud orchestration tools and containers like Docker to deploy scale-out queues, real-time stream processing and other Big Data analytics solutions.
You need to be able to train deep belief networks and push the models down to hubs and sensors, and to recognise threatening video images. You need to be able to do rolling upgrades and continuous deployment of updates, developer apps, etc., and to manage operations of millions of devices and billions of sensors. You want a store. A developer eco-system.
Now, when you have finally mastered all of this, make one security mistake and a hacker on the other side of the world will be able to control your house, business, city or country.
The business opportunities are huge as well. Shave a couple of percentage points off the production cost of a car and you can save hundreds of millions. Track a global epidemic, or the vital signs of a billion people, and you can save millions of lives. Give millions of developers a new way to channel their creativity and the Angry Birds of IoT will spawn new industries. The one that changes the habits of people will be the next billionaire.
Where is the money? Industrial IoT. Where is the innovation? Home automation and wearables. You can’t pick one. You need to connect innovation with money if you want to lead the IoT revolution. If somebody else does it for you, they can make your solution irrelevant.
Charles "Chuck" Butler, a colleague at Canonical, wrote a very nice blog post explaining the basics of Big Data. It not only explains the basics but also shows how anybody can set up Big Data solutions in minutes via Juju. Highly recommended reading:
This is a good example of the power of cloud orchestration. An expert creates charms and bundles with Juju, and afterwards anybody can easily deploy, integrate and scale the same Big Data solution in minutes.
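As a sketch, the deploy/integrate/scale workflow looks roughly like this (charm names and unit counts are illustrative, and a bootstrapped Juju environment is assumed):

```
# Deploy charms that an expert has already written
juju deploy hadoop-master
juju deploy hadoop-slave

# Integrate the services by declaring a relation between them;
# the charms handle the configuration details themselves
juju add-relation hadoop-master hadoop-slave

# Scale out by simply adding more units
juju add-unit -n 3 hadoop-slave

# Watch the environment converge
juju status
```

The point is that all the operational knowledge lives in the charm, so the person running these commands does not need to be a Hadoop expert.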
Samuel Cozannet, another colleague, used some of these components to create an open source Tweet sentiment analysis solution that can be deployed in 14 minutes and includes autoscaling, a dashboard, Hadoop, Storm, Kafka, etc. He presented it at the OpenStack Developer Summit in Paris and will shortly provide instructions for everybody to set it up.
The Internet of Things, or IoT for short, is going to be the biggest revolution of this century. However, the engineering challenges are massive. Billions of low-energy sensors will have to be invented, produced, integrated, operated and maintained. These sensors will talk to millions of gateways over new network protocols. The gateways will communicate via next-generation network devices and software-defined networks to public and private clouds. These clouds will run scale-out big data solutions that will dwarf even the biggest supercomputers today. Billions of containers will be running massive micro-services IoT business solutions. One mistake and a hacker from a country you would not even know how to locate can bring down a whole house, a whole company or, worse, a whole country.
The business side looks even less clear. The coming open source IoT tsunami means that asking for licence fees is suicide. Open source hardware and 3D printing result in razor-thin hardware margins. Privacy will be central when technology sits so close to people’s homes and businesses; “if it is free, then you are the product” will become a harder sell when a dot-com can know what you are doing in bed, with whom, and even whether you liked it. Business SLAs will be stricter than ever because customers will expect sub-second answers, yet human operators will no longer be able to find and fix problems because of the overall system complexity. Roll-outs will be global even though telecom operators’ networks don’t cross borders and electricity plugs and voltages are super local. Ask customers what IoT solution they want and they will answer: “What is IoT?”
Given these circumstances, you might be surprised that I will be focusing on solving these challenges. Telruptive readers will be the first to know when some are resolved…
This blog post is not about the technical details around LXC, LXD, Docker, Kubernetes, etc. It focuses on the different use cases LXD and Docker are solving and should help non-experts understand them.
Canonical demoed a prototype of LXD last week at ODS. Several journalists incorrectly concluded that LXD is a competitor to Docker. The truth is that LXD solves a completely different use case than Docker does. Ubuntu 14.04 was the first operating system to provide commercial support for Docker. Six times more Docker images are powered by Ubuntu than by all other operating systems combined. Ubuntu loves Docker.
Different Use Cases?
Docker is focused on being the universal container for applications. Developers love Docker because it allows them to quickly prototype solutions and share them with others. Docker is best compared to an onion: all internal layers are read-only and only the outermost layer is writeable. This means people can quickly reuse Docker containers made by others and add their own layers on top if desired. Afterwards you upload your “personalised onion” to the Docker Hub so that others can benefit from your work. Docker is ideal for boosting developer productivity and showing off innovations.
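The onion metaphor maps directly onto a Dockerfile: each instruction adds one read-only layer on top of the previous ones, and at run time Docker stacks a thin writable layer on top. A minimal sketch (base image and file names are illustrative):

```
# Base image: the innermost layers of the onion, pulled read-only
FROM ubuntu:14.04

# Each instruction below adds one more read-only layer on top
RUN apt-get update && apt-get install -y python
COPY app.py /srv/app.py

# When the container runs, Docker adds a thin writable layer;
# anything the container writes lands there and the layers below stay untouched
CMD ["python", "/srv/app.py"]
```

Because the lower layers are immutable and shared, another developer who builds from the same base image reuses those layers instead of downloading or rebuilding them.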
Canonical is the company behind Ubuntu, and Ubuntu powers 64% of all OpenStack clouds in production, the fastest growing open source project in the world. In OpenStack, as with VMware or on AWS, you run a hypervisor on a host operating system and then install a guest operating system on top. Because you have two layers of operating systems, one server can host many applications on multiple operating systems at the same time. This greatly optimises resource usage compared to non-virtualised servers. However, because operating systems are duplicated, a lot of resources are wasted.

Ideally you could put Docker directly inside OpenStack and run all applications from containers. The problem is that Docker does not give an administrator the ability to remotely log into the container and add monitoring, backup and the other routine things administrators do to guarantee SLAs. In comes LXD. LXD builds on top of a container technology called LXC, which Docker itself used originally. However, LXD gives you access to a virtual server, just as you would have with a hypervisor. The big difference is that LXD does not require operating systems to be duplicated. Instead it partitions the host operating system and ensures fair and secure usage between the applications that run inside different containers. The result is that the same server can pack many more applications, and both start-up and migration of applications between servers become extremely fast. The idea is not new: mainframes already had containers, and Solaris had containers. LXD just makes sure that your favourite private cloud has containers that are easy to manage.
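The "virtual server" feel can be sketched with the LXD client. The command names below follow the `lxc` client as it later shipped, so the demoed prototype may differ, and the host name is illustrative:

```
# Launch a full Ubuntu system container -- no guest kernel, no hypervisor
lxc launch ubuntu:14.04 web1

# Get a shell inside it, just as an administrator would SSH into a VM,
# to add monitoring, backup agents and other SLA tooling
lxc exec web1 -- /bin/bash

# Move the container to another LXD host
lxc move web1 host2:web1
```

Because no guest operating system has to boot, launch and migration happen in seconds rather than minutes.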
Can a hypervisor, Docker and LXD coexist?
Yes. The hypervisor can make sure Windows runs on top of an Ubuntu host [Linux containers cannot run Windows on top]. Docker containers can host next-generation scale-out solutions that are either purpose-built for Docker or have been changed to support some of the new paradigms Docker introduces. LXD is best for all the standard Linux workloads that you just want to move as-is, with no need to update the applications or the tools that are integrated with them.
Since LXD has an Apache licence and is available on Github, it is very likely that the future will evolve into a world where the advantages of LXD and Docker get combined in some shape or form, hopefully with new innovations added as well. That is the power of open source innovation and exactly the reason why Canonical has shared LXD with the world…
I just saw Eric Dishman’s TED session on “Health care should be a team sport“. I love the idea of giving people with chronic illnesses the means to be diagnosed and treated remotely, and of using big data to learn from a large group of patients with similar issues. Personally this would mean that when my sons have breathing problems we would not have to drag them in the middle of the night to a hospital where they are exposed to many viruses. Instead, by measuring their oxygen level and listening to their lungs, a personalised remote diagnosis could be made and a nebulizer or other treatment administered.

At scale all the equipment would probably cost less than £200: Maplin already sells the nebulizer and oxygen level meter for a combined £110, and at worst another £90 buys a stethoscope that can connect via Bluetooth to a smartphone. Via Hangout a doctor, and in the future perhaps even a computer programme, could remotely interpret the results. The results of millions of patients would be collected in order to improve treatment. There would be no need for an expensive hospital in London with a receptionist, a nurse and a doctor dedicating two hours. By avoiding just one hospital night, the whole system would be enormously profitable.

Additionally, Ubuntu’s Juju can be used to set up all the big data and diagnostic software in minutes, in any cloud or on any server anywhere in the world. If other open source solutions are used, the total solution would be within reach of any developing country. There is probably more than one developer whose kids are asthmatic and who would happily contribute time. It sounds like an ideal Gates Foundation or Kickstarter project. If you think you can help, please reach out to me, because this is not work for me; this is personal engagement.
It is not often that you are responsible for cloud [and Big Data and IoT] strategy in a company of 600 people and get told by the OpenStack Foundation that your solution went from 55% market share to 64%, while competitors like Red Hat, HP, VMware, etc. are spending hundreds of times [or more] more on marketing and engineering than you. Now I would love to claim responsibility for it, but I would be lying. My mentors, Mark Shuttleworth and Simon Wardley, laid the foundations years before I joined the company. But Ubuntu and Canonical, the company behind it, are the poster-child example of why promoting chief financial officers into strategic roles over the last ten years was a terrible idea. Bean counters are about to inflict potentially irreparable damage on iconic hardware and legacy software vendors. The reason is really simple: disruptive innovation. The innovator’s dilemma explained it years ago. When some initially inferior technology comes along, like Cloud Computing and OpenStack, existing vendors will not see any demand from existing customers. Only when the technology matures will customers start defecting en masse, but by then other companies have years of a head start. Add to this that Ubuntu OpenStack is not only the most innovative solution but also aims to be the most flexible [see our Autopilot, OIL, MAAS and Juju for more details] and the cheapest. So if you are on a quarter-based projected revenue track and you find out that your competitor is doing those three things extremely well, then it might be time to brush up those skills and experiences on your CV. Regarding the future, let me just tell you that the best is still to come :-)