
Archive for the ‘Innovation’ Category

IoT Revolution: SnApp Stores for any THINGS

January 20, 2015

Canonical, the company behind Ubuntu, just announced the biggest IoT innovation in history: SnApp Stores for any THINGS. Any THING can run apps from an associated Snapp Store. It is just like having apps on a mobile phone, except that the apps run on any THING.

What does this mean?

Developers will be able to create apps with Snappy Ubuntu Core – Snappy Apps, or Snapps – and run them on any THING. The list of THINGS is limited only by people’s imagination: vacuum cleaners, fridges, dishwashers, coffee machines, alarm systems, robots, drones, set-top boxes, HVAC systems, WiFi access points, switches, routers, telecom mobile base stations, agricultural irrigation controllers, swimming pool controllers, industrial appliances, medical equipment, digital signage, POS terminals, ATMs, smart energy meters, cars, radios, TVs, IP cameras, clouds, 3D printers, virtual reality wearables, smart hubs and any next-generation device that can run Ubuntu Core and still needs to be invented. If it has an ARMv7 or x86 chip and 256 MB of RAM or more, you can put a Snapp Store on it.

Apps took mobile phones from stupid calling devices to personalised smart supercomputers many of us could not live without. New industries were born. Entire industries were revolutionized. The app revolution is about to be repeated, but this time any THING is a target.

Imagine what will happen if all devices in your home, at work, in your city and on holidays go from stupid to smart and personalised. Your house will know you are stressed before you walk through the door. It will play the music it knows relaxes you, brew the coffee you prefer, set the ideal temperature and light intensity, block the calls you don’t want, have the house cleaned, have your favourite food minutes away from being delivered and the grocery shopping done, and have that interesting TV series waiting to entertain you. Your energy bill will be lower, your car will adapt to you, your hoover will collaborate with the alarm system, your pet will be fed the right diet, your children will have personalised parental control, and your mail packages will be delivered wherever you are.

Snapps will be limited only by your imagination, so start dreaming now about what the Snapp Store should bring you and make your dreams come true at ubuntu.com/things.

 

My Internet of Things

December 30, 2014

The Internet of Things (IoT) is impersonal. My lamp, dishwasher, heater, sprinkler, etc. are all islands with a border policy more closed than North Korea’s. Even the first generation of IoT devices is still deaf and mute: current devices only know how to talk to “their app” or “their cloud”. The solution is not just open APIs or standards but a step further: we need IoT apps everywhere. When you buy a phone, it is the same phone millions of others have. Yet something magical happens when you connect it to its app store/marketplace: the phone goes from an iPhone/Android to a miPhone/mydroid. We need dishwashers, vacuum cleaners and heaters to become personal as well. The easiest way is to create a MyIoT experience with IoT apps everywhere.

Why would your vacuum cleaner need apps?
Your vacuum cleaner should know your house the moment you unpack it, because your alarm system and your heater should tell it how big your house is. Your smart hub should guide your vacuum cleaner from day one. Your smartphone and Google calendar should tell it when you are away and when it is a good moment to clean. Your smart watch should tell it that the spike in your heartbeat, when it switched itself on while you were there, means its sound is annoying and it should stop immediately. No single company will build solutions this complex on its own. What we need is the ability to add apps to every sort of thing. This way the Internet of Things becomes My Internet of Things.
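No such collaboration exists today, but the plumbing for it does. Below is a minimal sketch of the idea: a vacuum cleaner subscribing to context published by other devices over MQTT. The broker address, topic names and JSON fields are all hypothetical, and the paho-mqtt client library (1.x callback API) is an assumption, not part of any product mentioned above.

```python
# Sketch: a vacuum cleaner consuming context from other home devices.
# Assumes a local MQTT broker and the paho-mqtt library (pip install paho-mqtt).
import json
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Hypothetical topics that a calendar bridge and a smart watch publish to.
    client.subscribe("home/calendar/presence")
    client.subscribe("home/watch/heartbeat")

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    if msg.topic == "home/calendar/presence" and event.get("status") == "away":
        print("Owner is away: a good moment to start cleaning.")
    elif msg.topic == "home/watch/heartbeat" and event.get("bpm", 0) > 120:
        print("Owner's heartbeat spiked: the noise annoys, stop immediately.")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.local", 1883)
client.loop_forever()
```

The point is not the protocol: it is that once every thing can run third-party apps, a twenty-line app can create exactly this kind of cross-device behaviour.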

The next IT revolution: micro-servers and local cloud

Have you ever counted the number of Linux devices at home or work that haven’t been updated since they left the factory? Your cable/fibre/ADSL modem, your WiFi access point, television sets, NAS storage, routers/bridges, media centres, etc. Typically this class of devices hosts a proprietary hardware platform, an embedded proprietary Linux and a proprietary application. If you are lucky, you can log into a web GUI, often with the admin/admin credentials, and upload a new firmware blob. Such firmware blobs are frequently hard to locate on hardware suppliers’ websites. No wonder the NSA and others love to look for firmware bugs: they are an ideal source of undetected wiretapping.

The next IT revolution: micro-servers
The next IT revolution is about to happen, however. Those proprietary hardware platforms will soon give way to commodity multi-core processors from ARM, Intel and others. General-purpose operating systems will replace their proprietary, embedded predecessors. Proprietary, static, single-purpose apps will be replaced by marketplaces and multiple apps running on one device. Security updates will be shipped regularly. Devices and apps will be easy to manage remotely. The next revolution will be about managing millions of micro-servers and the apps on top of them. These micro-servers will behave like a mix of phone apps, Docker containers and cloud servers. Managing them will be like managing a “local cloud”, sometimes also called fog computing.
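To make the “local cloud” idea a bit more tangible, here is a minimal sketch of a fleet controller rolling an app update out to a few micro-servers. The device list, port and update endpoint are all hypothetical; a real agent would also need authentication, signed packages and rollback.

```python
# Sketch: rolling an app update out across a fleet of micro-servers.
# The /apps/<name>/update endpoint is hypothetical; real devices would
# run an agent exposing something similar, protected by authentication.
import urllib.request

DEVICES = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]  # router, NAS, set-top box

def update_app(device, app, version):
    url = f"http://{device}:8080/apps/{app}/update?version={version}"
    req = urllib.request.Request(url, method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status == 200

for device in DEVICES:
    try:
        ok = update_app(device, "firewall", "2.1.0")
        print(device, "updated" if ok else "failed")
    except OSError as err:
        # An unreachable device simply stays on the old version; retry later.
        print(device, "unreachable:", err)
```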

Micro-servers and IoT?
Are micro-servers some form of Internet of Things? They can be, but not always. If you have a smart hub that controls your home or office, then that is pure IoT. However, if you have a router, firewall, fibre modem, micro-antenna station, etc., then the micro-server is just an improved version of its predecessor.

Why should you care about micro-servers?
If you are a mobile app developer, then the micro-server revolution will be your next battlefield. Local clouds need their “Angry Birds”-like successes.
If you are a telecom or network developer, then the next generation of micro-servers will give you unprecedented potential to combine traffic shaping with parental control with QoS with security with …
If you are a VC, then micro-server solution providers are the type of startups you want to invest in.
If you are a hardware vendor, then these are the devices and SoCs you want to build.
If you are a Big Data expert, then imagine the new data tsunami these devices will generate.
If you are a machine learning expert, then you might want to look at algorithms and models that are easy to execute on constrained devices once they have been trained on potentially thousands of cloud servers and petabytes of data (see the sketch after this list).
If you work in DevOps, then your next challenge will be managing and operating millions of constrained servers.
If you are a cloud innovator, then you will likely want to look into SaaS and PaaS management solutions for micro-servers.
If you are a service provider, then these are the solutions you want the capability to manage at scale and integrate with easily.
If you are a security expert, then you should start to think about micro-firewalls, anti-micro-viruses, etc.
If you are a business manager, then you should think about how new “mega micro-revenue” streams can be obtained or how disruptive “micro-innovations” can give you a competitive advantage.
If you are an analyst or consultant, then you can start predicting the next IT revolution and the billions the market will be worth in 2020.
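To make the machine-learning item above concrete: once a model has been trained on thousands of cloud servers, the artefact that ships to a constrained device can be as small as a vector of weights. A minimal sketch, with made-up weights and a made-up use case:

```python
# Sketch: inference on a constrained device with a cloud-trained model.
# Only the learned weights ship to the device; no ML framework needed.
import math

# Hypothetical logistic-regression weights exported after cloud training.
WEIGHTS = [0.8, -1.2, 0.3]
BIAS = -0.5

def predict(features):
    """Probability that, say, a sensor reading is anomalous."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

# Three normalised sensor readings from the device.
print(predict([0.9, 0.1, 0.4]))  # a probability between 0 and 1
```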

The next steps…
It is still early days, but expect some major announcements around micro-servers in the coming months…

Instant Ruby on Rails – Instantly deploy and scale your Rails app on any cloud

September 16, 2013

Many developers and devops engineers perform a lot of repetitive tasks every day. One of them is deploying a web app and scaling it. We all know the theory of deployment: install an app server, install a database, deploy your app on the app server and your data on the database.

Scaling is also a common problem, but one with well-known answers: put a load balancer in front, duplicate your app server, create database slaves for read-only data, create a database cluster for high volumes of writes, use in-memory or NoSQL databases for extremely high write volumes, use memcached to avoid going to the database, use Varnish to avoid going to the web server, and so on.
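As one concrete example from that list, the read-through caching pattern behind “use memcached to avoid going to the database” fits in a dozen lines. A sketch, assuming a local memcached and the pymemcache client library; the database call is a stand-in:

```python
# Sketch: read-through caching with memcached.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def fetch_from_database(user_id):
    return f"row-for-{user_id}"  # stand-in for the real (slow) query

def get_user(user_id):
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is None:                      # cache miss: hit the database once
        value = fetch_from_database(user_id)
        cache.set(key, value, expire=300)  # then serve from RAM for 5 minutes
    return value

print(get_user(42))  # first call hits the database, later calls hit memcached
```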

So these are not new problems, more like common recurring tasks for devops engineers and developers. What if instant solutions were available, so that anybody in the world, whatever their level of knowledge, could instantly install a scalable solution?

At Ubuntu we think open-source blueprint solutions for these common problems should be within everybody’s reach. Instantly deploying and scaling a Rails app on any cloud is already a reality: https://juju.ubuntu.com/docs/howto-rails.html. The next step is to make it even easier: one command, or one drag-and-drop, to deploy a complete stack in high availability; even one command to get continuous deployment plus high availability at once. This is exactly why we are organizing a contest to win $10,000, with six categories. Two of them should be familiar to you by now: high availability and continuous deployment.
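For readers who have not tried Juju yet, the deploy-and-scale flow from the howto linked above boils down to a handful of commands. Here they are scripted from Python for illustration; you would normally type them in a shell, and the charm names follow the howto but may differ per release:

```python
# Sketch: deploying and scaling a Rails stack with the juju CLI.
# Assumes a bootstrapped juju environment and the charms from the howto.
import subprocess

def juju(*args):
    subprocess.run(["juju", *args], check=True)

juju("deploy", "rails", "myapp")         # application server running your app
juju("deploy", "postgresql")             # the database
juju("add-relation", "myapp", "postgresql")
juju("deploy", "haproxy")                # load balancer in front
juju("add-relation", "haproxy", "myapp")
juju("expose", "haproxy")
juju("add-unit", "myapp", "-n", "2")     # scale out: duplicate the app server
```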

Can you imagine the extra time you would gain if all common recurring problems instantly disappeared? Especially when you consider that what is common and recurring for some experts may be rocket science for the rest of us. If you haven’t played around with Juju yet, this is the best time ever…

 

Lean Canvas: As if you were home 24×7 – The iDoor

January 12, 2013

In this post I want to show a technique that is an alternative to writing a business case: the Lean Canvas. The Lean Canvas was proposed in the book “Running Lean”, which is itself based on “Lean Startup”.

The idea of the Lean Canvas is to put what would go into the executive summary of a business case on one page and to forget about writing the rest. The justification: writing a business case takes 2 to 3 months, and CxOs normally only read the executive summary. So instead of spending 2 to 3 months, you spend hours or days and get it in front of customers for feedback. With that feedback you can refine your idea and create a Minimum Viable Product in the same 2 to 3 months. So instead of a nice paper report nobody reads, you can start earning money.

The Lean Canvas captures the major customer problems you want to solve. These problems need to be important [a painkiller, not a vitamin], shared by many and without an easy workaround. Customers need to validate them before you start thinking about solutions. Customers are the ideal party to tell you about their problems, but not necessarily the best to give you ideas for a solution. Think of Henry Ford’s words: “If I had asked people what they wanted, they would have said faster horses…”. Most startups focus excessively on the solution and forget that they need to validate a lot more. After the problem, the second most important part of the Lean Canvas is the customer segment and channel: who do you want to offer a product to, and how do you reach them? The unique value proposition is also key. The other elements of the Lean Canvas are the unfair advantage [how do I prevent others from simply copying my business?], key metrics [how do I measure success?] and, last but not least, the cost structure [what does it cost to acquire a customer, build a minimum viable product, etc.?] and revenue streams [how much am I going to charge, and what other revenue sources are there?]. You can create a Lean Canvas on paper or use a SaaS version.

So much for the theory; now let’s review an example…

The customer problems:

Door keys are a nuisance. You can lose them. You have to give copies to family and friends if you want them to enter your house when you are not there. Do you really want to give the cleaning lady or man a copy? And is my lock even safe from burglars?

The mailman or delivery person comes to my home, but packages often do not fit in my mailbox.

When people ring my bell, they know when I am not home. That is unsafe.

So what is the solution?

My proposed solution is the iDoor: an intelligent door which you control remotely to decide who enters, who delivers and who is shut out. Via a camera and full-duplex audio system, you can see who is standing in front of your door and communicate with them. Your smartphone becomes your remote door manager. Advanced models could have face recognition and share data with other intelligent doors in the neighbourhood, so that if you are taking a siesta and those annoying door-to-door vendors approach, they automatically hear a message to go away and your bell does not ring. If a burglar is detected, the police can be warned. If the postman has a big package, you can remotely open a compartment where he can store it. Your family can enter the house without problems. Your cleaning lady can as well, as long as it is within her normal working hours and she comes alone.
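Rules like these are easy to express in software. A toy sketch of the iDoor’s decision logic; every name, hour and rule below is hypothetical and exists only to illustrate:

```python
# Toy sketch of the iDoor access-decision rules described above.
from datetime import datetime

FAMILY = {"alice", "bob"}
CLEANER = "carla"
CLEANER_HOURS = range(9, 13)  # her normal working hours: 09:00-13:00

def decide(person, companions, now=None):
    """What should the door do for a recognised visitor?"""
    now = now or datetime.now()
    if person in FAMILY:
        return "open"
    if person == CLEANER:
        # Only during her normal hours, and only if she comes alone.
        if now.hour in CLEANER_HOURS and not companions:
            return "open"
        return "ask the owner"
    if person == "postman":
        return "open the package compartment"
    if person == "door-to-door vendor":
        return "play the go-away message"  # and the bell stays silent
    return "ring the bell and notify the owner's smartphone"

print(decide("carla", companions=[], now=datetime(2013, 1, 14, 10, 30)))  # open
```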

Unique value proposition?

As if you were home 24×7. Busy people will never miss an Amazon package again. Burglars will not know whether you are in the garden or simply not home at all.

Customer segments?

Owners of mid- to high-end homes.

Channels?

Partner with an existing door manufacturer that targets the upper end of the market, for example Hörmann.

Unfair Advantage?

TBD.

Key Metrics?

Door sales and door usage.

Cost Structure?

A complete costing has to be done. TBD.

Revenue Streams?

Door sales and door installation/maintenance services are the primary revenue streams. However, door apps and the sale of anonymised, aggregated data could be additional sources.

Summary:

You can find a quick summary in the slides below, together with some details about the technology components. This example still needs customer validation, and several areas need quite a bit more work [e.g. cost, revenue, unfair advantage]. However, I hope the idea is clear.

Maarten Ectors is a senior executive specialised in value innovation: creating new products and generating new revenues based on cutting-edge technologies like Big Data, Cloud, etc. He is currently looking for new challenges. You can contact him at: maarten at telruptive dot com.

Mesos: Your next highly distributed Cloud architecture framework

August 21, 2012

I initially complained about the complexity of installing Mesos when I was playing around with Spark and Shark. However, when I saw the Twitter Mesos and Framework presentation, I understood why Mesos can be disruptive to how you architect applications in the highly distributed manner typical of cloud computing.

You can see the presentation here.

The key is that Twitter combined Mesos with ZooKeeper, Linux control groups and Google’s Protocol Buffers, as well as with Spark, Storm and Hadoop. This gives them a way to easily program services that can be scaled to hundreds of Mesos nodes and that are automatically upgraded and restarted in case of failure. Resource usage is controlled via the control groups, ZooKeeper manages the configuration and Protocol Buffers ensure efficient communication between nodes. Services can use Spark and Storm for real-time operations and Hadoop for batch. Developers do not have to worry about scaling the services, deploying them to different nodes, and so on; this is handled by the Twitter framework and the Mesos master.
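To give a feel for what writing such a framework involves, here is a skeleton of a Mesos scheduler. It follows the Scheduler callback interface of the Python bindings from later Mesos releases (module paths and signatures vary across versions), and the launched task is just a placeholder shell command:

```python
# Skeleton of a Mesos framework scheduler (Python bindings).
from mesos.interface import Scheduler, mesos_pb2
from mesos.native import MesosSchedulerDriver

class HelloScheduler(Scheduler):
    def registered(self, driver, framework_id, master_info):
        print("Registered with framework id", framework_id.value)

    def resourceOffers(self, driver, offers):
        # Mesos hands us slices of cluster resources; we decide what runs where.
        for offer in offers:
            task = mesos_pb2.TaskInfo()
            task.task_id.value = "hello-1"
            task.slave_id.value = offer.slave_id.value
            task.name = "hello"
            task.command.value = "echo hello from mesos"
            cpus = task.resources.add()
            cpus.name = "cpus"
            cpus.type = mesos_pb2.Value.SCALAR
            cpus.scalar.value = 0.1
            mem = task.resources.add()
            mem.name = "mem"
            mem.type = mesos_pb2.Value.SCALAR
            mem.scalar.value = 32
            driver.launchTasks(offer.id, [task])

    def statusUpdate(self, driver, update):
        print("Task", update.task_id.value, "changed state:", update.state)

framework = mesos_pb2.FrameworkInfo(user="", name="hello-framework")
MesosSchedulerDriver(HelloScheduler(), framework, "zk://localhost:2181/mesos").run()
```

ZooKeeper does the master discovery in that last line; everything else, such as failover, restarts and resource isolation via control groups, comes with the platform.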

There is only one thing to add: “TWITTER PLEASE OPEN SOURCE YOUR TWITTER FRAMEWORK” or in Twitter language: “#mesos please #opensource #twitterfw now @telruptive “…

Hadoop for Real-Time: Spark, Shark, Spark Streaming, Bagel, etc. will be 2012’s new buzzwords

August 15, 2012

The website defines Spark as a MapReduce-like cluster computing framework designed to support low-latency iterative jobs. It would be simpler, though, to say that Spark is Hadoop for real-time.

Spark allows you to run MapReduce jobs together with your data on distributed machines. Unlike Hadoop, Spark can distribute your data in slices and store it in memory, so your processing and your data are co-located in memory. This gives an enormous performance boost. Spark is more than MapReduce, however: it offers a new distributed framework on which different distributed computing paradigms can be modelled. Examples: Hadoop’s Hive => Shark (40x faster than Hive), Google’s Pregel / Apache’s Giraph => Bagel, etc. The upcoming Spark Streaming is supposed to bring real-time streaming to the framework.

The excellent part

Spark is written in Scala and has a very straightforward syntax for running applications, from the command line or via compiled code. The ability to run iterative operations over large datasets, or very compute-intensive operations in parallel, makes it ideal for big data analytics and distributed machine learning.
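That in-memory co-location is what makes iterative jobs cheap: you pay for loading the data once, and every subsequent pass runs against RAM. A small illustration, shown here with Spark’s Python bindings (the native API is Scala; the file path is a placeholder):

```python
# Sketch: caching an RDD so repeated passes hit memory, not disk.
from pyspark import SparkContext

sc = SparkContext("local", "iterative-demo")

# Load once, keep the slices in memory across iterations.
logs = sc.textFile("hdfs:///logs/access.log").cache()

for day in range(7):
    # Each pass scans RAM instead of re-reading the file; this is where
    # Spark beats plain Hadoop MapReduce for iterative workloads.
    hits = logs.filter(lambda line, d=day: f"day={d}" in line).count()
    print(day, hits)
```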

The points for improvement

To use Spark, you need to install Mesos. Mesos is a framework for distributed computing that was also developed at Berkeley, so in a sense they are eating their own dog food. Unfortunately Mesos is not written in Scala, so installing Spark becomes a mix of make’s, ant’s, .sh, XML, properties, .conf files, etc. This would not be so bad if Mesos had consistent documentation, but due to its incubation into Apache the installation process is currently undergoing changes and is not straightforward.

Spark can connect to Hadoop, HBase, etc. However, running Hadoop on top of Mesos is “experimental”, to say the least. The integration with Hadoop should be lighter: in the end, only access to HDFS, SequenceFiles and the like is required. That should not require installing a complete Hadoop and recompiling Spark for each specific Hadoop version.

If Spark wants to become as successful as Hadoop, it should learn from Hadoop’s mistakes. Complex installation is a big problem, because Spark needs to be installed on many machines. The Spark team should take a look at Ruby’s RubyGems, Node.js’s npm, etc. and make installation simple, ideally via Scala’s package manager, even though it is less popular.

If possible, the team should drop Mesos as a prerequisite and make it optional. One of Spark’s competitors is Storm & Trident: you can install a Storm cluster in minutes and run Storm on an EC2 cluster with a one-click command.

It would be nice if there were an integration SDK that allowed extensions to be plugged in. Integrations with Cassandra, Redis, memcached, etc. could then be developed by others. A distribution in which Cassandra’s Brisk is used to mimic Hive and HDFS (a.k.a. CassandraFS), pre-bundled with Shark, could also be an option. Spark’s in-memory execution and read speed, combined with Cassandra’s write speed, should make for a pretty quick and scalable solution. Ideally without the need to fight with namenodes, datanodes, jobtrackers and other hard-to-configure Hadoop inventions…

The conclusion is that distributed computing and programming are already hard enough by themselves. Programmers should be able to focus on their algorithms, not need a professional admin to get them started.

All in all, Spark, Shark, Spark Streaming, Bagel, etc. have a lot of potential; they are just a little rough around the edges…

Update: I am reviewing my opinion about Mesos. See the Mesos post.
