
Archive for the ‘Disrup. Technology’ Category

Why you should care about Kubernetes, Juju, Mesos, etc.

Every day a new orchestration solution is being presented to the world. This post is not about which one is better but about what will happen if you embrace these new technologies.

The traditional scale-up architecture
Before looking at the new solutions, let’s understand what is broken with the current ones. Enterprise IT vendors have traditionally sold software licensed by the number of processors. If you were a small company you had 5 servers; if you were big you had 50-1000 servers. With the cloud anybody can boot up 50 servers in minutes, so reality has changed. Small companies can easily manage 10,000 servers, e.g. think of successful social or mobile startups.

Software was also written and optimised for performance per CPU. Much traditional software comes with a long list of exact specifications that need to be followed in order for you to get enterprise support.

Big bloated frameworks are used to manage the thousands of features that are found in traditional enterprise solutions.

The container micro services future
Enterprise software is often hard to use, integrate, scale, etc. This is all the consequence of creating a big monolithic system that contains solutions for as many use cases as possible.

In come cloud, containers, micro-services, orchestration, etc. and all rules change.

The best micro services architecture is one where each important use case is reflected in one service, e.g. the shopping cart service deals with your list of purchases, but it relies on the session storage service and the identity service to do its job.

Each service runs in its own container, and services can be integrated and scaled in minutes or even seconds.
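To make the shopping cart example concrete, here is a minimal sketch in Python. The service names, ports and endpoints (session-storage, identity, /whoami, /sessions) are illustrative assumptions, not a prescribed API; in a real deployment they would be wired up by the orchestrator or by service discovery.

```python
# Minimal sketch of a shopping-cart micro service that delegates to two other
# services. All URLs and endpoint names below are hypothetical.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical endpoints; an orchestrator would normally inject these.
SESSION_SERVICE = "http://session-storage:8001"
IDENTITY_SERVICE = "http://identity:8002"

@app.route("/cart", methods=["GET"])
def get_cart():
    token = request.headers.get("Authorization", "")
    # Delegate authentication to the identity service.
    user = requests.get(f"{IDENTITY_SERVICE}/whoami",
                        headers={"Authorization": token}).json()
    # Fetch the user's cart items from the session storage service.
    items = requests.get(f"{SESSION_SERVICE}/sessions/{user['id']}/cart").json()
    return jsonify({"user": user["id"], "items": items})

if __name__ == "__main__":
    app.run(port=8000)
```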

What benefits do micro services and orchestration bring?
In a monolithic world, change means long regression tests and risk. In a micro services world, change means innovation and fast time to market. You can easily upgrade a single service. You can make it scale elastically. You can implement alternative implementations of a service and see which one beats the current one. You can do rolling upgrades and rolling rollbacks.

So if enterprise solutions were available as many reusable services that can all be instantly integrated, upgraded, scaled, etc., then time to market would become incredibly fast. You have an idea. You implement five alternative versions. You test them. You combine the best three into a new alternative, or you use two implementations for different customer segments. All of this is impossible with monolithic solutions.
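A hedged sketch of how requests might be split between a current and a candidate implementation, either by customer segment or by a traffic share. The backend URLs, the segment name and the 20% share are made-up examples of the idea, not a recommendation.

```python
# Route traffic between two alternative implementations of the same service.
import random
import requests

BACKENDS = {
    "current": "http://cart-v1:8000",     # hypothetical current implementation
    "candidate": "http://cart-v2:8000",   # hypothetical alternative implementation
}

def pick_backend(customer_segment: str, candidate_share: float = 0.2) -> str:
    # Send a whole segment to the candidate, or a random slice of all traffic.
    if customer_segment == "early-adopters":
        return BACKENDS["candidate"]
    return BACKENDS["candidate"] if random.random() < candidate_share else BACKENDS["current"]

def get_cart(customer_segment: str, token: str):
    backend = pick_backend(customer_segment)
    return requests.get(f"{backend}/cart", headers={"Authorization": token}).json()
```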

This sounds like we reinvented SOA
Not quite. SOA focused on reusable services but it never embraced containers, orchestration and cloud. By having a container like Docker or a service in the form of a Juju Charm, people can exchange best practices instantly. Services can be deployed, integrated, scaled, upgraded, etc. SOA only focused on the way services were discovered and consumed. Micro services additionally focus on global reuse, scaling, integration, upgrading, etc.

The future…
We are not quite there yet. Standards are still being defined, not in the traditional standardisation bodies but via market adoption. However, expect to see micro services being orchestrated at large scale via open source solutions in the next 12 months. As soon as the IT world has the solution, industry-specific solutions will emerge. You will see communication solutions, retail solutions, logistics solutions, etc. Traditional vendors will not be able to keep pace with the innovation speed of a micro services orchestrated, industry-specific solution. Expect the SAPs, Oracles, etc. of this world to be in shock when all of a sudden nimble HR, recruiting, logistics, inventory and supplier relationship management solutions emerge that are offered as SaaS and on-premise, often open source. Super easy to use, integrate, manage, extend, etc. It will be like LEGO starting a war against custom-made toys. You already know who will be able to be more nimble and flexible…

Software Defined Everything

The other day taxis in London were on strike because Uber was setting up shop in London. Do you know a lot of people that still send paper letters? Book holiday flights via a travel agent? Buy books in book stores? Rent DVD movies?

Five smart programmers can bring down a whole multi-billion-dollar industry and change people’s habits. It has long been known that any company that changes people’s habits becomes a multi-billion-dollar company. Cereals for breakfast, brown-coloured sweet water, throw-away shaving equipment, the online bookstore, online search & ads, etc. You probably figured out the name of the brand already.

Software Defined Everything is Accelerating

The Cloud, crowd funding, open source, open hardware, 3D printing, Big Data, machine learning, Internet of Things, mobile, wearables, nanotechnology, social networks, etc. all seem like individual technology innovations. However, things are changing.

Your Fitbit will send your vital signs via your mobile to the cloud, where deep belief networks analyse them and find out that you are stressed. Your smart hub detects you are approaching your garage, and your Arduino controller linked to your IP camera encased in a 3D-printed housing detects that you brought a visitor. A LinkedIn and Facebook image scan finds that your visitor is your boss’s boss. Your Fitbit and Google Calendar have given away over the last months that whenever you have a meeting with your boss’s boss, you get stressed. Your boss’s boss’s music preferences are guessed from public information available on social networks. Your smart watch gets a push notification with the personal profile data that could be gathered about your boss’s boss: he has two boys and a girl, he recently got divorced, the girl recently won a chess award, a Facebook-tagged picture shows your boss in a golf tournament three weeks ago, an Amazon book review indicates that he likes Shakespeare but only the early work, etc. All of a sudden your house shows pictures of that one time you played golf. Music plays according to what 96.5% of Shakespeare lovers like, from a crowd-funded Bluetooth in-house speaker system…

It might be a bit far-fetched, but what used to be disjoint technologies and innovations are fast coming together. Those companies that can both understand the latest cutting-edge innovations and apply them to improve their customers’ lives or solve business problems will have a big competitive edge.

Software is fast defining more and more industries. Media, logistics, telecom, banking, retail, industrial, even agriculture will see major changes due to software (and hardware) innovations.

What should you do? If you are technology savvy?

You should look for customers that want faster horses and draw a picture of a car. Make a slide deck. Get feedback and adjust. Build a prototype. Get feedback and adjust. Create a minimum viable product. Get feedback and adjust… Change the world.

If you have a business problem and money but are not technology savvy?  

Organise a competition in which you ask people to solve your problem and give prizes to the best solutions. You will be amazed by what can come out of these.

If you work in a traditional industry and think software is not going to redefine what you do?

Call your investment manager and ask them if you have enough money in the bank to retire in case you get fired next year and can’t find a job any more. If the answer is no, then start reading this blog post again from the top…

Fog Computing might Save Operators from an IoT Data Tsunami

July 1, 2014

Cisco came up with the term Fog Computing and The Wall Street Journal has endorsed it, so I guess Fog Computing will become the next hype.

What is Fog Computing?

The Internet of Things will embed connectivity into billions of devices. Common thinking says your IoT device is connected to the cloud and shares data for Big Data analytics. However, if your Fitbit starts sending your heartbeat every 5 seconds, your thermometer tells the cloud every minute that it is still 23.4 degrees, your car tells the manufacturer its hourly statistics, farmers measure thousands of acres, hospitals measure remote patients’ health continuously, etc., then your telecom operator will go bankrupt because their network is not designed for this IoT Data Tsunami.

Fog Computing is about taking decisions as close to the data as possible. Hadoop and other Big Data solutions started the trend of bringing processing close to where the data is, and not the other way around. Fog Computing is about doing the same on a global scale. You want decisions to be taken as close as possible to where the data is generated, and you want to stop raw data from reaching global networks. Only valuable data should travel on global networks. Your Fitbit could send average heartbeat reports every hour or day and only send alerts when your heartbeat passes a threshold for some amount of time.
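A minimal sketch of that Fitbit example: raw readings stay on the device, only an hourly average goes upstream, and an alert is sent only when the heart rate stays above a threshold for a sustained period. The threshold, window size and the report_to_cloud/send_alert hooks are illustrative assumptions.

```python
from collections import deque

THRESHOLD_BPM = 140
SUSTAINED_SAMPLES = 24        # e.g. 24 samples at one per 5 seconds = 2 minutes
readings = []                 # buffer for the hourly report
recent = deque(maxlen=SUSTAINED_SAMPLES)

def report_to_cloud(payload):   # placeholder for the (rare) uplink call
    print("uplink:", payload)

def send_alert(payload):        # placeholder for the urgent uplink call
    print("ALERT:", payload)

def on_sample(bpm: float):
    readings.append(bpm)
    recent.append(bpm)
    # Alert only if every recent sample exceeds the threshold.
    if len(recent) == SUSTAINED_SAMPLES and all(r > THRESHOLD_BPM for r in recent):
        send_alert({"max_bpm": max(recent)})
        recent.clear()

def on_hour_elapsed():
    if readings:
        report_to_cloud({"avg_bpm": sum(readings) / len(readings),
                         "samples": len(readings)})
        readings.clear()
```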

How to implement Fog Computing?

Fog Computing is best done via machine learning models that get trained on a fraction of the data in the cloud. Once a model is considered adequate, it gets pushed to the devices. Having a decision tree, some fuzzy logic or even a deep belief network run locally on a device to take a decision is a lot cheaper than setting up an infrastructure in the cloud that needs to deal with raw data from millions of devices. So there are economic advantages to using Fog Computing. What is needed are easy-to-use solutions to train models and send them to highly optimised, low-resource execution engines that can be easily embedded in devices, mobile phones and smart hubs/gateways.
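A sketch of that train-in-the-cloud, score-on-the-device pattern using a small scikit-learn decision tree. The feature names, toy data and the transport (here just a pickled byte string) are assumptions; any serialisation or over-the-air update mechanism would do.

```python
import pickle
from sklearn.tree import DecisionTreeClassifier

# --- cloud side: train on a sample of historical data ----------------------
X_sample = [[72, 0], [150, 1], [95, 0], [160, 1]]   # [heart_rate, moving]
y_sample = [0, 1, 0, 1]                             # 0 = normal, 1 = stressed
model = DecisionTreeClassifier(max_depth=3).fit(X_sample, y_sample)
model_bytes = pickle.dumps(model)                   # pushed to the device

# --- device side: tiny, cheap local decision -------------------------------
local_model = pickle.loads(model_bytes)
if local_model.predict([[155, 1]])[0] == 1:
    print("send alert upstream")                    # only now hit the network
else:
    print("keep the data local")
```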

Fog Computing is also useful for Non-IoT

Network elements should also become a lot more intelligent. When was the last time you were at a large event with many people around you? Can you remember any event in the last 24 months where WiFi was working brilliantly? Most of the time WiFi works in the morning when people are still getting in, but soon after it stops working. Fog Computing can be the answer here. You only need to analyse data patterns and take decisions on what takes up lots of bandwidth. Chances are that all the mobiles, tablets and laptops connected to the event WiFi have Dropbox or some other large file sharing service enabled. You take some pictures at the event and, since you are on WiFi, the network gets saturated by a photo sharing service that is not really critical for the event. Fog Computing would detect this type of bandwidth abuse and would limit it or even block it. At the moment this has to be done manually, but computers would do a much better job of it. So Software Defined Networking should be all over Fog Computing.
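An illustrative sketch of that event-WiFi scenario: inspect per-flow byte counters and flag flows to non-critical file-sync or photo-backup services so an SDN controller can rate-limit them. The flow records, host list and threshold are entirely made up for illustration.

```python
BULK_SYNC_HOSTS = {"dropbox.com", "photos.example.com"}   # hypothetical list
BULK_THRESHOLD_BYTES = 50 * 1024 * 1024                   # 50 MB per window

flows = [
    {"client": "10.0.0.12", "dst": "dropbox.com", "bytes": 900_000_000},
    {"client": "10.0.0.17", "dst": "conference-app.example", "bytes": 4_000_000},
]

def flows_to_throttle(flows):
    for flow in flows:
        if flow["dst"] in BULK_SYNC_HOSTS and flow["bytes"] > BULK_THRESHOLD_BYTES:
            yield flow   # hand these to the SDN controller to rate-limit

for f in flows_to_throttle(flows):
    print("throttle", f["client"], "->", f["dst"])
```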

Telecom Operators and Equipment Manufacturers Should Embrace Fog Computing

Telecom operators should heavily invest in Fog Computing by creating open source standards that can be easily embedded in any device and managed from any cloud. When I say standards, I don’t mean ETSI. I mean organise a global Fog Computing competition with a $10 million award for the best open source Fog Computing solution. Make a foundation around it with a very open license, e.g. the Apache License. Invite, and if necessary oblige, all telecom and general network suppliers to embed it.

The alternatives are…

Not solving this problem will provoke heavy investment in global networks that carry 90% junk data, and an IoT Data Tsunami. Solving this problem via network traffic shaping is a dangerous play in which privacy and net neutrality will come up sooner rather than later. You cannot block Dropbox, YouTube or Netflix traffic globally. It is a lot easier if everybody blocks what is not needed, or at least minimises such traffic, themselves. Most people have no idea how to do that. Creating easy-to-use open source tools would be a good first step…

Adrian Cockcroft: “Google will not be a huge factor in enterprise computing”

Adrian was speaking at Gigaom’s Structure event and one detail of Gigaom’s article caught my attention. According to them, Adrian thinks: “Google will not be a huge factor in enterprise computing”.

How can it be that one of the biggest technology companies, owner of the most advanced distributed systems in the world and the inventor of cloud computing for internal use, cannot crack enterprise computing?

Why is Google’s Cloud not ready for Enterprise Computing?

1) Cloud-only vision

Google is the only one of the three that has a cloud-only vision. The other two understand that enterprises will not drop everything they are doing and move all their systems to the cloud overnight. Without a “VPC” or hybrid cloud vision, Google is going nowhere.

2) Focused on the visionaries

API solutions for mobile, prediction, etc. are all good and well, but most enterprises don’t know what OAuth and REST mean. They are still stuck in the CORBA, J2EE/RMI, .NET, etc. era. Yes, Google has Apps, Gmail, etc. and they can compete with Office, Exchange, etc., but most enterprise software is customised for Office integration, not yet for Apps integration.

3) Lack of exit strategy

If you are a challenger you need to convince enterprises that the risk of moving to your platform is worth it. The best strategy is to say that people can easily go back. When AWS was only starting, Eucalyptus was the exit strategy. What will a CTO do when Google’s prediction API becomes too expensive? In enterprises the expression has always been: “Nobody ever got fired for choosing [fill in SAP, Microsoft, Oracle, etc.]”. AWS is the dominant player. Without an exit strategy, Google is a big risk for enterprises.

4) Lack of trust

Google’s Gmail is famous for reading your emails and putting targeted ads on the Internet. Snowden, the NSA and Google scare non-American enterprises.

Solutions?

1) Cheapest without free movement is worthless

Google is starting a price war, but AWS and Azure have done a good job of locking people into their services’ APIs. Google should work on multi-cloud solutions that allow people to convert any software into as-a-Service, a.k.a. Anything-as-a-Service / XaaS. Make people independent of the cloud provider and price becomes the most important aspect. There are already solutions for XaaS; you just need to know where to look.

2) On-Site Option

Google should embrace OpenStack and make sure it delivers on par with the market leader VMware, but more importantly make sure that there is a one-click option to move between OpenStack on-premise and the Google cloud, and vice versa.

3) Easy path from yesterday to tomorrow

Are you hooked on Exchange, Oracle, SAP, etc.? There should be easy migration tools as well as solutions to encapsulate the past and make it work with the future. Instant legacy integration is possible. Again you just need to know where to look.

4) Trust & SLA

One simple message: “Google will not spy on you and will give you the best SLA of the cloud industry”.

How the Cloud makes Windows irrelevant

June 5, 2014

Windows has been running on the majority of PCs for many years now. Microsoft successfully translated its client monopoly into a stronghold server position. However times are changing and it is no surprise that the new CEO of Microsoft is a Cloud expert. Cloud can make Windows irrelevant.

Why?
On the cloud you no longer use a client-server architecture. HTML5 has come a long way and is close to feature parity with most Windows GUI applications. HTML5 means that you can do mobile, tablet and PC without installation or client-side version management. This means that Salesforce, Google Apps, Workday and other SaaS solutions have become enterprise successes overnight. Mobile first means Android and iOS first.

However the cloud is also bringing deeper changes. Innovation has never been cheaper. You don’t need to invest in anything. Hardware is almost for free. Software solutions are just an API away. Storage is infinite. Distribution is global.

Mobile game companies were the first to experience overnight successes whereby on Monday they launched 2 servers and by Sunday they managed 5000.

The next frontier will be business software. Small and nimble SaaS players will become overnight successes. Their software stacks will be different, however. SQL Server clusters, and even worse Oracle and DB2 database clusters, are no longer enough. They don’t scale technically. They don’t make sense financially. They are extremely hard to manage compared to nimble alternatives.

Windows on the server is in no better shape. Docker and CoreOS promise lightweight, fast scale-out. Ubuntu’s Juju is showing instant integration everywhere. The operating system is fast becoming a liability instead of an asset. Restarts of minutes to upgrade are not in line with 24×7, 100% SLAs. In a time where each container tries to be as small and efficient as possible and upgrades need to be transactional and expressed in microseconds, Windows is no longer the platform of choice. The cloud gave Ubuntu, an open source Linux operating system, up to 70% market share and growing. Remember what happened to Netscape and RealPlayer the moment Windows reached 80-90% penetration.

So what should Microsoft do?
The first thing is to acknowledge the new reality and embrace & extend Linux. Many companies would love to migrate their .NET solutions to efficient Linux containers. Office on Linux desktops is overdue. Why not give governments open source desktop solutions? They will gladly pay millions to boost their national pride. China did. Why would India, Russia, France, Germany, Brazil, Spain, Italy, Turkey, Saudi Arabia, Israel and the UK be different? Active Directory, SharePoint and Exchange will lose market dominance if they do not embrace Linux. Windows phones with a Linux core could actually run Android apps and would level the playing field. Linux developers have been secretly jealous of how easy it is to build great-looking GUI apps. A Visual Studio for .NET on Linux (and, to be disruptive, for Go, Rails and Python) would win developer mind share.
IoT and embedded solutions that hold a Microsoft Linux kernel would make Android sweat.
Microsoft open source solutions, in which you get the platform for free but developers can resell apps and extensions, would deliver Microsoft revenue shares, support and customisation revenues. Pivotal is showing how to do just this. Instant SaaS/PaaS enablement and integration solutions are hot, but Cloud Foundry is not a Windows play.

But all of this is unlikely to thrive if Microsoft keeps its current internal structures. Just plainly buying some Linux thought leaders is unlikely to be enough. Microsoft could take inspiration from EMC, where most people don’t know that RSA, VMware and Pivotal all float into the same pockets. Consulting services & sales from one company are rewarded for selling products owned by the group. Office, Cloud, Phone, IoT and Business Software as independent units that can each determine how they interact with the Windows and Linux business units would accelerate innovation.

Let’s see if Redmond is up for change. The new CEO at least seems to have vastly improved chances of change…

The future of Big Data is linked to Cloud

Data volumes are growing exponentially. Unstructured data from Twitter, LinkedIn, mailing lists, etc. has the potential to transform many industries if it can be combined with structured data. Machine learning, natural language processing, sentiment analysis, etc.: everybody talks about them, hardly anybody is really using them at scale. Unfortunately, too many people who talk about Big Data start with the answer and then ask what the problem is. The answer seems to be Hadoop. News flash: Hadoop is not the answer, and if you start from the answer and go looking for problems then you are doing it wrong.

What are Common Data Problems?

Most Big Data problems are about storage and reporting. How do I store all the exponentially growing data in such a way that business managers can get to it in seconds when they need it? Ad-hoc reporting, adequate prediction, and making sense of the exponentially growing data stream are the key problems.

Big Data Storage?

Do you have relational data, unstructured data, graph data, etc.? How do you store different types of data and make them available inside an enterprise? The basis for big data storage is cloud storage technology. You want to store any type of data and be able to quickly scale up storage. RedHat did not buy Inktank for $175M because traditional storage has solved all of today’s problems. Premium SAN and other storage technologies are old school. They are too expensive for Big Data. They were designed with the idea that each byte of data is critical for an enterprise. That is no longer the case. You mind losing transactional sales data. You don’t mind so much losing sample tweets you bought from Datasift or Apache log files from an internal low-impact server. This is where cloud storage solutions like Inktank’s Ceph allow commodity storage to be built that is reliable, scalable and extremely cost effective. Does this mean you don’t need SANs any more? Wrong again. TV did not kill radio. Same here.

Cloud storage technologies are needed because each type of data behaves differently. If you have log data that is only appended, then HDFS is fine. If you have read-mostly data, then a relational database is ideal. If you have write-mostly data, then you need to look at NoSQL. If you need heavy read-and-write, then you need strong Big Data architecture skills. What is more important: low latency, consistency, reliability, cheap storage, etc.? Each of these means that the solution is different. Low latency means in-memory or SSD. Consistency means transactional. Reliability means replication. You can even now find approximate-answer databases like BlinkDB. There is no longer one size fits all. Oracle is no longer the answer to everybody’s data questions.
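Those rules of thumb can be written down as a toy decision helper. This is only a sketch that mirrors the paragraph above; the workload attributes and return strings are illustrative, not a product recommendation engine.

```python
def pick_storage(workload: dict) -> str:
    """Map rough workload characteristics to a storage style (toy example)."""
    if workload.get("append_only"):
        return "HDFS or object storage"
    if workload.get("latency_ms", 100) < 1:
        return "in-memory store or SSD-backed cache"
    if workload.get("write_heavy"):
        return "NoSQL store"
    if workload.get("read_mostly"):
        return "relational database"
    return "needs a dedicated Big Data architecture review"

print(pick_storage({"append_only": True}))                  # log data
print(pick_storage({"read_mostly": True}))                  # reporting data
print(pick_storage({"write_heavy": True, "latency_ms": 5})) # event ingestion
```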

What will companies need? Companies need cloud storage solutions that offer these different storage capabilities like a service. Amazon’s RDS, DynamoDB, S3 and Redshift are examples of what companies need. However companies need more flexibility. They need to be able to migrate their data between public cloud providers to optimise their costs and have added security. They also need to be able to store data in private local clouds or nearby hosted private clouds for latency or regulatory reasons.

The future of ETL & BI

Traditional ETL will see a revolution. ETL never worked. Business managers don’t want to ask their IT department to make a change in a star schema in order to import some extra data from the Internet, followed by updates to reports and dashboards. Business managers want an easy-to-use tool that can answer their ad-hoc queries. This is the reason why Tableau Software + Amazon Redshift are growing like crazy. However, if your organisation is starting to pump terabytes of data into Redshift, be warned: the day will come that Amazon sends you a bill that your CxO will not want to pay, and he/she will want you to move out of Amazon. What will you do then? Do you have an exit strategy?

The future of ETL and BI will be web tools that any business manager can use to create ad-hoc reports. The Office generation wants to see dynamic HTML5 GUIs that allow them to drag-and-drop data queries into ad-hoc reports and dashboards. If you need training then the tool is too difficult.

These next-generation BI tools will need dynamic back-office solutions that allow storing real-time, graph, blob, historical relational, unstructured, etc. data into a commonly accessible cloud storage solution. Each one will be hosted by a different cloud service, but they will all be an API away. Software will be packaged in such a way that it knows how to export its own data. Why do you need to know where Apache stores the access and error logs and in which format? Apache should be able to export whatever interesting information it contains in a standardised way into some deep storage. Machine learning should be used to make decisions on how best to store that data for ad-hoc reporting afterwards. Humans should no longer be involved in this process.
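As a small sketch of “software that knows how to export its own data”, here is a parser that turns Apache access log lines (common/combined format) into structured records that any deep-storage service could ingest. The upload step is a placeholder; the storage API is an assumption.

```python
import json
import re

LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

def export_line(line):
    """Return a structured record for one access-log line, or None."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

sample = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326'
record = export_line(sample)
print(json.dumps(record))   # in practice: push to the storage service's API
```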

Talking about machine learning: with data volumes growing from gigabytes into petabytes, traditional data scientists will not scale. In many companies a data scientist is treated like a report monkey: “Find out why in region X we sold Y% less”, etc. Data scientist should not be synonymous with dynamic report generator. Data scientists should be machine learning experts. They should tell the computer what they want, not how to compute it. Today’s data scientists pride themselves on knowing R, Python, etc. These tools are too low-level to be usable at scale. There are just not enough people in the world to learn R. Data is growing exponentially; R experts at best can grow linearly. What we need are machine learning GUI solutions like RapidMiner Studio, but supported by petabyte-scale cloud solutions. A short-term solution could be an HTML5 GUI version of RapidMiner Studio that connects to a back-end set of cloud services using some of the nice Apache Spark extensions for machine learning, streaming, Big Data warehousing/SQL, graph retrieval, etc., or solutions based on Druid.io. For sure other solutions are possible.
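For a flavour of what such a Spark-backed service might run behind the GUI, here is a hedged sketch using Spark’s ML pipeline API from Python. The column names and toy data are illustrative; a real back end would read from the cloud storage discussed above.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("sketch").getOrCreate()

# Toy training data; in practice this would be a table in deep storage.
df = spark.createDataFrame(
    [(34.0, 2.0, 0), (51.0, 7.0, 1), (28.0, 1.0, 0), (63.0, 9.0, 1)],
    ["age", "purchases", "label"],
)

# Assemble feature columns and fit a simple model on the cluster.
features = VectorAssembler(inputCols=["age", "purchases"], outputCol="features")
model = LogisticRegression(maxIter=10).fit(features.transform(df))
print(model.coefficients)
spark.stop()
```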

What is important is that companies start realising that data is becoming a strategic weapon. Those companies that are able to collect more of it and convert it into valuable knowledge and wisdom will be tomorrow’s giants. Most average machine learning algorithms become substantially better just by throwing more and more data at them. This means that having a Big Data architecture is not as critical as having the best trained models in the industry and continuing to train them. There will be a data divide between the haves and the have-nots. Google, Facebook, Microsoft and others have been buying any startup that smells like Deep Belief Networks. They have done this for a good reason. They know that tomorrow’s algorithms and models will be more valuable than diamonds and gold. If you want to be one of the haves then you need to invest in cloud storage now. You need massive historical data volumes to train tomorrow’s algorithms, so start building the foundations today…


The end of cloud 1.0 is near

Google started a cloud price war and AWS and Azure are responding. The result will be the end of cloud 1.0 and the beginning of 2.0.

What is cloud 1.0?
Cloud 1.0 is all about giving customers the basics like storage, compute, API provisioning, load balancing, autoscaling, software defined networking, etc.

AWS is the undisputed winner of cloud 1.0, having taken the majority of the market. However, prices are about to drop substantially. This means that compute and storage will become commodities.

Now what will Cloud 2.0 bring? It depends on whom you ask. Here are some of the answers:

AWS Cloud 2.0
We are the cloud standard. You should come to us because we have abstracted every possible low-level service behind an easy to consume API. You don’t have to worry about a thing…

Google Cloud 2.0
Amazon is built on old school technology. We invented Cloud; just come to us and you will get superior technology and lightning speed. If you don’t want to pay but are willing to let us peep inside your VM, then we are even willing to give you the cloud for free. Remember, Google is good…

Azure Cloud 2.0
We are the easiest cloud to work with. If you have a Windows data centre then you can have a Windows cloud data centre in minutes.

Mom & Pops Cloud 2.0
We were cheaper than Amazon before but no longer. We are less innovative. Cash flow problem……..tut….tut…tut…crack

CTO Cloud 2.0
It is clear that having my own data centre is not a good business. Unless I need a private cloud for legal reasons, why not start using the public cloud? But which one, and how can I move from one to another?

My Cloud 2.0
People need solutions that allow switching between different public and private clouds and find a common solution that is easy to use and very price competitive, ideally free.

If you want to see my Cloud 2.0 become reality and you have used too many proprietary cloud services, take a look at what we are doing at Juju.ubuntu.com.

Instant Solutions for Common Problems

Today Ubuntu officially launched Juju bundles and Juju quickstart. What does this mean? Via a Juju bundle anybody can create a blueprint solution and allow others to make instant copies. You drag and drop a bundle, or you use a one-line command, and within minutes you can deploy a complete software stack, pre-integrated and scaled, onto any cloud or server.

Many problems can be solved this way. The types of solutions included in the blog post below show what is possible. Anything from instant SaaS integration, Big Data, programming environments, ERP, Cobol integrations, telecom solutions, etc. can be instantly deployed, integrated and scaled.

Check it out:

http://insights.ubuntu.com/news/juju-bundles-and-quickstart-create-an-entire-cloud-environment-in-seconds/

Making the World More Secure, Instantly…

December 23, 2013

This week a new Juju Lab was launched: Instant Single Sign-on and 2-Factor Authentication. The Juju Lab is a new direction for Juju innovation in which a community of contributors builds a revolutionary solution for a common problem. This time the problem is how to make the world more secure, instantly. Juju Labs work like Kickstarter: either the goals are met and the project becomes a full Juju solution, or the project dies.

Future Juju Labs are being considered. Everything from enterprise Java auto-tuning, instantly scaling PHP, instant legacy integration, instant BI, etc. As long as it solves a common problem in an exponentially better way, creating a Juju Lab is an option.

The main problem is how can you quickly evaluate which common problem to tackle first. Any ideas are welcome…

The Ryanairs Of Telecom are Here…

December 17, 2013

After years of virtually no innovation from telecom operators, 2014 will be different. Not because telecom dinosaurs have all of a sudden become lean, mean innovation machines. Quite the contrary. Most operators are still focusing on rolling out THIS YEAR’S (instead of today’s) “innovative” service, which will be just a copycat of some famous dotcom.

So why the excitement?
2014 will be the pivot year. The year that will be marked in history books as the year old school lost and innovators won.

The first Ryanair-like disruptive telecoms will leave their borders and start bankrupting “traditional telecoms”. Cross-platform voice/video 4G apps will reach the tipping point. Cloud Telco PaaS will be reality. Individual communication solutions or iCommunication will be a reality. Web 3.0 will include voice & video communication. NFV will be driven by non-telecom players. WAN SDN will be deployed by more than only Google, Amazon, etc. Cloud Media Streaming will reach the tipping point. Internet of things will meet Cloud will meet Big Data will meet Mobile will meet disruptive communication solutions. Early adopters paradise…

2014 will be an exciting year for those that love telecom innovation!!! Bit pipe nightmares becoming reality for others.
