Archive

Posts Tagged ‘big data’

Software Defined Everything

The other day taxis in London were on strike because Uber was setting up shop in the city. Do you know many people who still send paper letters? Book holiday flights via a travel agent? Buy books in book stores? Rent DVD movies?

Five smart programmers can bring down a whole multi-billion-dollar industry and change people's habits. It has long been known that any company that changes people's habits becomes a multi-billion-dollar company. Cereals for breakfast, brown-coloured sweet water, throw-away shaving equipment, the online bookstore, online search & ads, etc. You probably figured out the names of the brands already.

Software Defined Everything is Accelerating

The Cloud, crowdfunding, open source, open hardware, 3D printing, Big Data, machine learning, the Internet of Things, mobile, wearables, nanotechnology, social networks, etc. all seem like individual technology innovations. However, things are changing.

Your Fitbit will send your vital signs via your mobile to the cloud, where deep belief networks analyse them and find out that you are stressed. Your smart hub detects you are approaching your garage, and your Arduino controller linked to your IP camera encased in a 3D-printed housing detects that you have brought a visitor. A LinkedIn and Facebook image scan finds that your visitor is your boss's boss. Your Fitbit and Google Calendar have given away over the last months that whenever you have a meeting with your boss's boss, you get stressed. Your boss's boss's music preferences are guessed from public information available on social networks. Your smart watch gets a push notification with the personal profile data that could be gathered about your boss's boss: he has two boys and a girl, he recently got divorced, the girl recently won a chess award, a Facebook-tagged picture shows him in a golf tournament three weeks ago, an Amazon book review indicates that he likes Shakespeare but only the early work, etc. All of a sudden your house shows pictures of that one time you played golf, and music that 96.5% of Shakespeare lovers like plays from a crowd-funded Bluetooth in-house speaker system…

It might be a bit far-fetched, but what used to be disjoint technologies and innovations is fast coming together. Those companies that can both understand the latest cutting-edge innovations and apply them to improve their customers' lives or solve business problems will have a big competitive edge.

Software is fast defining more and more industries. Media, logistics, telecom, banking, retail, industrial, even agriculture will see major changes due to software (and hardware) innovations.

What should you do if you are technology savvy?

You should look for customers that want faster horses and draw a picture of a car. Make a slide deck. Get feedback and adjust. Build a prototype. Get feedback and adjust. Create a minimum viable product. Get feedback and adjust… Change the world.

What if you have a business problem and money but are not technology savvy?

Organise a competition in which you ask people to solve your problem and give prizes for the best solution. You will be amazed by what can come out of it.

What if you work in a traditional industry and think software is not going to redefine what you do?

Call your investment manager and ask whether you have enough money in the bank to retire in case you get fired next year and cannot find a job any more. If the answer is no, then start reading this blog post again from the top…

The future of Big Data is linked to Cloud

Data volumes are growing exponentially. Unstructured data from Twitter, LinkedIn, mailing lists, etc. has the potential to transform many industries if it can be combined with structured data. Machine learning, natural language processing, sentiment analysis, etc.: everybody talks about them, but hardly anybody is really using them at scale. Too many people, when they talk about Big Data, unfortunately start with the answer and then ask what the problem is. The answer seems to be Hadoop. News flash: Hadoop is not the answer, and if you start from the answer and go looking for problems then you are doing it wrong.

What are Common Data Problems?

Most Big Data problems are about storage and reporting. How do I store all the exponentially growing data in such a way that business managers can get to it in seconds when they need it? Ad-hoc reporting, adequate prediction, and making sense of an ever-growing data stream are the key problems.

Big Data Storage?

Do you have relational data, unstructured data, graph data, etc.? How do you store different types of data and make them available inside an enterprise? The basis for big data storage is cloud storage technology: you want to store any type of data and be able to scale up storage quickly. Red Hat did not buy Inktank for $175M because traditional storage has solved all of today's problems. Premium SANs and other storage technologies are old school. They are too expensive for Big Data. They were designed with the idea that every byte of data is critical to an enterprise. This is no longer the case. You mind losing transactional sales data; you don't mind so much losing sample tweets you bought from DataSift or Apache log files from an internal low-impact server. This is where cloud storage solutions like Inktank's Ceph allow commodity storage to be built that is reliable, scalable and extremely cost-effective. Does this mean you don't need SANs any more? Wrong again. TV did not kill radio. Same here.
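As a concrete illustration, Ceph exposes an S3-compatible object gateway, so commodity cloud storage can be scripted with the same tools used for Amazon S3. The sketch below is a minimal example, assuming a hypothetical gateway endpoint, credentials and bucket name, and using today's boto3 SDK rather than anything Ceph-specific.

import boto3

# Hypothetical endpoint of a Ceph RADOS Gateway (S3-compatible API);
# credentials and bucket name are placeholders for illustration only.
s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-gateway.example.internal:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "raw-logs"
s3.create_bucket(Bucket=bucket)

# Low-value bulk data (Apache logs, sampled tweets) goes to cheap object storage...
with open("/var/log/apache2/access.log", "rb") as f:
    s3.put_object(Bucket=bucket, Key="apache/access-2014-06-01.log", Body=f)

# ...and can be listed and pulled back later for batch analysis.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"])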

Cloud storage technologies are needed because each type of data behaves differently. If you have log data that is only appended, HDFS is fine. If you have read-mostly data, a relational database is ideal. If you have write-mostly data, you need to look at NoSQL. If you need heavy read-and-write, you need strong Big Data architecture skills. What matters more: low latency, consistency, reliability, cheap storage, etc.? Each answer points to a different solution. Low latency means in-memory or SSD. Consistency means transactional. Reliability means replication. You can now even find databases like BlinkDB that trade exact answers for speed. There is no longer one size that fits all. Oracle is no longer the answer to everybody's data questions.

What will companies need? They need cloud storage solutions that offer these different storage capabilities as a service. Amazon's RDS, DynamoDB, S3 and Redshift are examples of what companies need. However, companies need more flexibility. They need to be able to migrate their data between public cloud providers to optimise costs and add security. They also need to be able to store data in private local clouds or nearby hosted private clouds for latency or regulatory reasons.
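To make "storage capability as a service" concrete, here is a minimal sketch of writing and reading a record in DynamoDB with boto3. The region, table name and key schema are assumptions for illustration; the point is that a managed key-value store is literally one API call away, with no server to manage.

import boto3

# Assumed region and table; in practice the table would be created with a
# matching key schema (here: user_id as partition key, ts as sort key).
dynamodb = boto3.resource("dynamodb", region_name="eu-west-1")
events = dynamodb.Table("user_events")

# Write-mostly workloads map naturally onto a managed NoSQL store.
events.put_item(Item={"user_id": "42", "ts": 1401580800, "event": "login"})

# Reads by key come straight back from the service.
response = events.get_item(Key={"user_id": "42", "ts": 1401580800})
print(response.get("Item"))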

The future of ETL & BI

Traditional ETL will see a revolution. ETL never worked. Business managers don't want to go and ask their IT department to change a star schema in order to import some extra data from the Internet, followed by updates to reports and dashboards. Business managers want an easy-to-use tool that can answer their ad-hoc queries. This is the reason Tableau Software + Amazon Redshift are growing like crazy. However, if your organisation is starting to pump terabytes of data into Redshift, be warned: the day will come when Amazon sends you a bill your CxO will not want to pay, and he/she will want you to move out of Amazon. What will you do then? Do you have an exit strategy?
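For reference, the reason the Tableau + Redshift combination works is that Redshift speaks the PostgreSQL wire protocol, so any ad-hoc query tool, or a few lines of script, can talk to it directly. A minimal sketch with a hypothetical cluster endpoint, database and table:

import psycopg2

# Hypothetical Redshift cluster endpoint, database, table and credentials.
conn = psycopg2.connect(
    host="analytics.abc123xyz.eu-west-1.redshift.amazonaws.com",
    port=5439,
    dbname="warehouse",
    user="analyst",
    password="...",
)

with conn.cursor() as cur:
    # The kind of ad-hoc question a business manager actually asks.
    cur.execute("""
        SELECT region, SUM(amount) AS revenue
        FROM sales
        WHERE sale_date >= '2014-01-01'
        GROUP BY region
        ORDER BY revenue DESC;
    """)
    for region, revenue in cur.fetchall():
        print(region, revenue)

conn.close()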

The future of ETL and BI will be web tools that any business manager can use to create ad-hoc reports. The Office generation wants to see dynamic HTML5 GUIs that allow them to drag-and-drop data queries into ad-hoc reports and dashboards. If you need training then the tool is too difficult.

These next-generation BI tools will need dynamic back-office solutions that allow storing real-time, graph, blob, historical relational, unstructured and other data in a commonly accessible cloud storage solution. Each one will be hosted by a different cloud service, but they will all be an API away. Software will be packaged in such a way that it knows how to export its own data. Why do you need to know where Apache stores the access and error logs and in which format? Apache should be able to export whatever interesting information it contains in a standardised way into some deep storage. Machine learning should be used to decide how best to store that data for ad-hoc reporting afterwards. Humans should no longer be involved in this process.

Speaking of machine learning: with data volumes growing from gigabytes into petabytes, traditional data scientists will not scale. In many companies a data scientist is treated like a report monkey: "Find out why we sold Y% less in region X", etc. Data scientist should not be synonymous with dynamic report generator. Data scientists should be machine learning experts. They should tell the computer what they want, not how to do it. Today's data scientists pride themselves on knowing R, Python, etc. These tools are too low-level to be usable at scale. There are just not enough people in the world to learn R: data is growing exponentially, while the number of R experts at best grows linearly. What we need are machine learning GUI solutions like RapidMiner Studio, but backed by petabyte-scale cloud solutions. A short-term solution could be an HTML5 GUI version of RapidMiner Studio that connects to a back-end set of cloud services using the Apache Spark extensions for machine learning, streaming, Big Data warehousing/SQL, graph processing, etc., or solutions based on Druid.io. Other solutions are certainly possible.
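To give a flavour of what such a Spark-backed service would run under the hood, here is a minimal PySpark sketch that trains a logistic regression model with Spark's ML library. The input path, feature columns and label column are assumptions; the point is that the training scales out over a cluster rather than over more data scientists.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-model").getOrCreate()

# Hypothetical CSV with numeric feature columns and a 0/1 "label" column.
df = spark.read.csv("s3a://raw-logs/churn.csv", header=True, inferSchema=True)

# Assemble the feature columns into the single vector column Spark ML expects.
assembler = VectorAssembler(
    inputCols=["visits", "avg_basket", "days_since_last_order"],
    outputCol="features",
)
train = assembler.transform(df).select("features", "label")

# Training is distributed across the cluster; the driver only coordinates.
model = LogisticRegression(maxIter=50).fit(train)
print(model.coefficients, model.intercept)

spark.stop()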

What is important is that companies start realising that data is becoming a strategic weapon. Those companies that are able to collect more of it and convert it into valuable knowledge and wisdom will be tomorrow's giants. Even average machine learning algorithms become substantially better just by throwing more and more data at them. This means that having a Big Data architecture is not as critical as having the best-trained models in the industry and continuing to train them. There will be a data divide between the haves and the have-nots. Google, Facebook, Microsoft and others have been buying any startup that smells like deep belief networks, and with good reason. They know that tomorrow's algorithms and models will be more valuable than diamonds and gold. If you want to be one of the haves, then you need to invest in cloud storage now. You need massive historical data volumes to train tomorrow's algorithms, so start building the foundations today…

 

Solving the pressing need for Linux talent…

September 17, 2013

The Linux Foundation recently shared the infographic below. Click on it and you get the associated report. The short message: if you are a Linux expert you are in high demand, because companies cannot find enough experts due to the Cloud and Big Data boom.

Unless cloning machines are discovered later this year, quickly expanding the number of Linux experts is unlikely to happen. This means total cost of ownership for enterprises is likely to rise. This is ironic since Linux is all about open source and providing some of the most amazing solutions for free.

The obvious alternative is to focus on Microsoft products. They are relatively cheap in total cost of ownership since licenses are "payable" and average Windows skills are easier to find.

However, Microsoft is losing the server war, especially in the web application space. So this is not a winning strategy if you are going to do Cloud.

How to solve the pressing need for Linux talent?

The only viable strategy is to lower the number of experts needed per company. Larger companies will always need some, but they should be focused on the "interesting, high-value tasks". This concept of interesting and high-value is key. With the number of cloud servers exploding, we cannot expect the number of experts to explode as well.

Open source products like Puppet and Chef have helped alleviate the pain for the more "skilled" companies: one DevOps engineer can manage more than ten times as many machines as before. Unfortunately these server provisioning tools are not for the faint of heart. They require experts who know both administration and coding.

It is time for the next generation of tools. Ubuntu, the #1 Cloud operating system, is leading the way with Juju. If Linux wants to continue to be successful, then the common problems, the boring problems, the repetitive problems, etc. should be solved, and solved by Linux gurus in such a way that we, the less IT-gifted, can get instant solutions for them.

We need a Linux democracy in which the less skilled, who are unfortunately the majority, can instantly reuse best-in-class blueprint solutions. Juju is a new class of tool that gives you instant solutions for all those common problems: scaling a web application, monitoring your infrastructure, sharding MongoDB, replicating a database, installing a Hadoop cluster, setting up continuous integration, etc. The individual software components have been "charmed": a Charm allows the software to be instantly deployed, integrated and scaled. However, the real revolution is just starting. Juju will have bundles pretty soon. Technically speaking, a bundle is a collection of pre-configured and integrated Charms. In layman's terms, a bundle is an instant solution for a common problem. You deploy a bundle [one command or drag-and-drop] and you get a blueprint solution. Since Juju is open source, the community can create as many instant solutions as there are common problems.
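As an illustration of the "instant solution" idea, the sketch below scripts a few standard juju commands (deploy, add-relation, expose, add-unit) from Python to stand up and scale a charmed WordPress + MySQL stack. It is only a sketch: the charm names and unit counts are examples, and in practice you would run the same commands straight from the shell or the GUI.

import subprocess

def juju(*args):
    """Run a juju CLI command and stop on the first failure."""
    subprocess.run(["juju", *args], check=True)

# Deploy two charmed components, wire them together and open the firewall.
juju("deploy", "wordpress")
juju("deploy", "mysql")
juju("add-relation", "wordpress", "mysql")
juju("expose", "wordpress")

# Scaling out is one more command, not another evening of sysadmin work.
juju("add-unit", "wordpress", "-n", "2")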

So if you want to scale your IT solutions without stretching your budget, cloning your employees, or locking yourself into proprietary and expensive commercial software, then try Juju: play with the GUI or install it today.

MapD – Massively Parallel GPU-based database

An MIT student recently created a new type of massively parallel database, one that runs on graphics processors instead of CPUs. MapD, as it is called, makes use of the immense computational power available in off-the-shelf graphics cards that can be found in any laptop or PC. MapD is especially suitable for real-time querying, data analysis, machine learning and data visualization. MapD is probably only one of many databases that will try new hardware configurations to cater for specific application use cases.

Alternative approaches could focus on large sets of cheap mobile processors, Parallella processors, Raspberry Pis, etc., all stitched together. The idea would be to create massive processing clouds based on cheap specialized hardware that could beat traditional CPU clouds in both price and performance, at least for some specific use cases…

5 Strategies for Making Money with the Cloud

January 22, 2013

Everybody hears about Cloud Computing on television now. Operators will store your contacts in the Cloud. Hosting companies will host your website in the Cloud. Others will store your photos in the Cloud.

However, how do you make money with the Cloud?

The first thing is to forget about infrastructure and virtualization. If you think that in 2013 the world needs more IaaS providers, then you haven't seen what is currently on offer (Amazon, Microsoft, Google, Rackspace, Joyent, Verizon/Terremark, IBM, HP, etc.).

So what are the alternative strategies?

1) Rocket Internet SaaS Cloning

Your best hope is SaaS and PaaS. The best markets are non-English-speaking markets. We have seen an explosion of SaaS in the USA, but most of it has not made it to the rest of the world yet. Only some bigger SaaS solutions (Webex, GoToMeeting, Office 365, etc.) and PaaS platforms (Salesforce, Workday, etc.) are available outside of the US and the UK. However, most SaaS and PaaS solutions are still English-only. So the quickest way to make some money is to simply copy, translate and paste a successful English-only SaaS product. If you do not know how to copy dotcoms, take a look at how the Rocket Internet team does it. Of course, you should always stay open to those annoying problems everybody has that could use an innovative new solution, and create your own SaaS from them.

2) SaaSification

During the gold rush, be the restaurant, hotel or tool shop. While everybody is looking for the SaaS gold, offer solutions that save the gold diggers time and money. SaaSification lets others focus on building their SaaS business, not on reinventing for the millionth time a web page, web store, email server, search, CRM, monthly subscription billing, reporting, BI, etc. Instead of "Use Shopify to create your online store", it should be "Use <YOUR PRODUCT> to create a SaaS business".

3) Mobile & Cloud

Everybody has, or is at least thinking about buying, a smartphone. However, there are very few really good mobile services that fully exploit the Cloud. I can get a shopping list app, but most are just glorified to-do lists. None recommends where to shop based on current promotions and comparisons with other buyers. None helps me find products inside a large supermarket. None learns from my shopping habits and suggests items for the list. None lets me take a number at the seafood queue. These are just examples for one mobile + cloud app. Think about any other field and you are sure to find great ideas.

4) Specialized IaaS

As I mentioned before, IaaS is already overcrowded, but there is one exception: specialized IaaS. You can focus on specialized hardware, e.g. virtualized GPUs, DSPs or mobile ARM processors; on network virtualization like SDN and OpenFlow; on mobile and tablet virtualization; embedded device virtualization; machine learning IaaS; or car software virtualization.

5) Disruptive Innovations + Cloud

Sell disruptive innovations and offer them as Cloud services. Examples could be 3D printing services, wireless sensor networks / M2M, Big Data, wearable tech, open source hardware, etc. The Cloud will lower your costs and give you a globally, elastically scalable solution.

Big Data 2013 Predictions

January 1, 2013

If you just invested a lot of money in a Big Data solution from any of the traditional BI vendors (Teradata, IBM, Oracle, SAS, EMC, HP, etc.) then you are likely to see a sub-optimal ROI in 2013.

Several innovations will come in 2013 that will change the value of Big Data exponentially. Other technology innovations are just waiting for smart start-ups to put them to good use.

Real-Time Hadoop

The first major innovation will be Google Dremel-like solutions, such as Impala and Drill, coming of age. They will allow real-time queries on Big Data and be open source, so you will get, for free, an offering superior to what is currently available.

Cloud-Based Big Data Solutions

The absolute market leader is Amazon with EMR. Elastic MapReduce is not so much about being able to run a MapReduce job in the Cloud as about paying for what you use and no more. The traditional BI vendors are still getting their heads around usage-based licensing for the Cloud. Expect a lot of smart startups to come up with really innovative Big Data and Cloud solutions.
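A sketch of the pay-for-what-you-use idea using today's boto3 SDK: launch a transient EMR cluster that runs one step and terminates itself, so the billing stops the moment the job is done. The cluster sizing, IAM roles and step definition below are illustrative assumptions, not a recipe.

import boto3

emr = boto3.client("emr", region_name="eu-west-1")

# A transient cluster: it shuts down (and stops billing) after the last step.
response = emr.run_job_flow(
    Name="nightly-wordcount",
    ReleaseLabel="emr-6.15.0",
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "wordcount",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/wordcount.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])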

Big Data Appliances

You can buy some really expensive Big Data appliances, but here too disruptive players are likely to change the market. GPUs are relatively cheap. Stack them into servers and use something like Virtual OpenCL to build your own GPU virtualization cluster. These types of home-made GPU clusters are already being used for security-related Big Data work.
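For a feel of what programming such a GPU box looks like, here is a minimal PyOpenCL sketch that adds two large vectors on whatever OpenCL device is available. It is a toy kernel, not a Big Data workload, but the same model (copy data to the device, run a kernel over millions of elements in parallel, copy results back) is what makes GPUs attractive in these appliances.

import numpy as np
import pyopencl as cl

# Pick an available OpenCL device (GPU if present) and set up a command queue.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# One work-item per element; the device runs them massively in parallel.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(np.allclose(out, a + b))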

Also expect more hardware vendors to pack mobile ARM processors into server boxes. Dell, HP, etc. are already doing it. Imagine the potential for distributed MapReduce.

Finally, Parallella will put a 16-core supercomputer into everybody's hands for $99. Their 2013 supercomputer challenge is definitely something to keep your eyes on. Their roadmap talks about 64- and 1000-core versions. If Adapteva can keep its promises and flood the market with Parallellas, then expect Parallella clusters to be the 2013 Big Data appliance.

Distributed Machine Learning

Mahout is a cool project, but MapReduce might not be the best architecture for iterative distributed backpropagation or other machine learning algorithms. Jubatus looks promising. Algorithmic innovations like Hogwild! could also really change the dynamics of efficient distributed machine learning. This space is definitely ready for more ground-breaking innovations in 2013.
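For intuition, Hogwild!-style training lets several workers update one shared weight vector without any locking and still converge on sparse problems. Below is a minimal, single-machine sketch using Python's multiprocessing and a lock-free shared array; the synthetic data, learning rate and process count are made up for illustration and this is not the original Hogwild! implementation.

import numpy as np
from multiprocessing import Process, Array

def worker(shared_w, X, y, lr=0.01, epochs=5):
    # NumPy view on the shared, lock-free weight vector: updates race on purpose.
    w = np.frombuffer(shared_w, dtype=np.float64)
    for _ in range(epochs):
        for i in np.random.permutation(len(y)):
            pred = 1.0 / (1.0 + np.exp(-X[i].dot(w)))   # logistic prediction
            w -= lr * (pred - y[i]) * X[i]               # unlocked SGD step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))
    true_w = rng.normal(size=10)
    y = (X.dot(true_w) > 0).astype(np.float64)

    shared_w = Array("d", 10, lock=False)                # no lock: that is the point
    workers = [Process(target=worker, args=(shared_w, X, y)) for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    print(np.frombuffer(shared_w, dtype=np.float64))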

Easier Big Data Tools

This is still a big gap in the open source field. Having open source, easy-to-use drag-and-drop tools for Big Data analytics would really accelerate adoption. We already have some good commercial examples (Radoop = RapidMiner + Mahout, Tableau, Datameer, etc.), but we are missing good open source tools.

I am currently looking for new challenges, so if you are active in the Big Data space and are looking for a knowledgeable senior executive, be sure to contact me at maarten at telruptive dot com.

How can I generate new revenues from my data is the wrong question…

December 27, 2012

With Big Data in the news all day, you would think that having a lot of high-quality data is a guarantee of new revenues. However, asking yourself how to generate new revenues from existing data is the wrong question. It is a sub-optimal question because it is like having a hammer and assuming everything else is a nail.

A better question to ask is: "What data insight problems do potential customers have that I could solve?" Read more…

Scaling Machine Learning

October 17, 2012

There is currently still a vacuum for easy & scalable solutions in the machine learning space.

At the moment everybody is talking about Hadoop as the de facto standard for Big Data. Unfortunately, Hadoop is not a real-time system. MapReduce can be used for batch machine learning, like training a logistic regression, support vector machine or neural network with batch gradient descent, etc. However, when it comes to real-time predictions it is not the platform of choice. Additionally, Java is losing its status as the preferred language every day. New machine learning algorithms are more likely to be developed in R, Scala, Python, Go, etc. There is of course Mahout, which is scalable, but "easy" is not the word you would use to describe it.
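To show why batch training fits MapReduce, here is a minimal Python sketch of one gradient descent iteration expressed as a map over data partitions (partial gradients) followed by a reduce (their sum). On a real Hadoop cluster the partitions would live on HDFS and the map/reduce steps would run on different machines; here the split is simulated in memory with made-up data.

import numpy as np

def partial_gradient(w, X_part, y_part):
    """Map step: each partition computes its own logistic-regression gradient."""
    preds = 1.0 / (1.0 + np.exp(-X_part.dot(w)))
    return X_part.T.dot(preds - y_part)

def train(X, y, n_partitions=4, lr=0.1, iterations=100):
    w = np.zeros(X.shape[1])
    X_parts = np.array_split(X, n_partitions)
    y_parts = np.array_split(y, n_partitions)
    for _ in range(iterations):
        # Map: partial gradients per partition; Reduce: sum them up.
        grads = [partial_gradient(w, Xp, yp) for Xp, yp in zip(X_parts, y_parts)]
        w -= lr * sum(grads) / len(y)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 8))
    true_w = rng.normal(size=8)
    y = (X.dot(true_w) > 0).astype(float)
    print(train(X, y))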

If you want to create your own algorithms but do not want to write low-level Java MapReduce, there are alternatives like Pig [for the SQL-minded], Cascading [Java, but easy, and it allows test-driven development!], Scalding [Scala on top of Cascading, made by Twitter; it can be combined with libraries like Scalala for easy vector and matrix work similar to Matlab], etc.

What other options are there?
Storm could be an option for time series, predictions based on a pre-trained model, online learning algorithms, etc. However, what is missing is an extension like Trident, but for distributed machine learning, so that you do not have to reinvent the wheel: a sort of Mahout for Storm.

Spark is another option, but Mesos is still in its very early days, and here too a Mahout for Spark would be a good addition. Compared with Storm, Spark would be ideal for training complex machine learning algorithms that need to iterate millions of times over the same data set.

GraphLab can be an option for those looking for social network analytics or other graph-based machine learning.

If you want to work with R, you could use packages like snow or parallel, but this means reinventing a lot of the distributed management of processing nodes. Both packages only provide the basic functions to launch external processing nodes and lack professional management of a large cluster. You could also look at RHadoop, as long as you are fine with non-real-time processing on top of Hadoop. As alternatives to RHadoop you could look at RHIPE, or at Segue, which is R + Amazon Elastic MapReduce, etc.
Update: an interesting extension for R (pbd) has just been released that promises R execution on over 10,000 cores. Read more about it here.
What is missing?

Simplicity, ease of use and reusability. What is needed is a solution that is cross-platform (R, Scala, Java, Python, Matlab, etc.). With a visual interface like RapidMiner or KNIME that allows 80% of the work to be drag-and-drop. With a reusable library of the most used algorithms for prediction, clustering, classification, outlier detection, dimension reduction, normalization, etc. Ideally with a marketplace for sharing data and algorithms. With an easy interface to manage your data and create reports, similar to Datameer. Ideally integrated with tools for data cleaning (e.g. Google's Refine) and ETL (e.g. Pentaho, Talend, JasperReports, etc.). But most of all with a powerful distributed engine that allows both batch processing [Hadoop] and real-time processing [e.g. Storm]. And finally, with a one-click install.

If my requirements are missing some important aspects, let me know. If you want to construct such a system, please contact me…

Trident Storm, Real-Time Analytics for Big Data

August 13, 2012

I already mentioned Storm in a previous post. Trident is an extension of Storm that turns it into an easy-to-use distributed real-time analytics framework for Big Data. Both Trident and Storm were developed at Twitter.

One of Twitter's major problems is keeping statistics on tweets and tweeted URLs that get retweeted to millions of followers. Imagine a famous person who tweets a URL to millions of followers, many of whom retweet it. How do you calculate how many Twitter users have potentially seen the URL? This "reach" number is important for features like "top retweeted URLs".

The answer was Storm, but with the addition of Trident it has become a lot easier to manage. Trident does to Storm what Pig and Cascading do to Hadoop: simplification. Instead of having to create a lot of Spouts and Bolts and take care of how messages are distributed, Trident comes with a lot of the work already done.

In a few lines of code, you set up a distributed RPC server, send it URLs, and have it collect the tweeters and followers and count them. Failover and resilience, as well as massively distributed throughput, are built into the platform. You can see it in this example code:
// Static state mapping a URL to the users who tweeted it, and a user to their followers.
TridentState urlToTweeters =
    topology.newStaticState(getUrlToTweetersState());
TridentState tweetersToFollowers =
    topology.newStaticState(getTweeterToFollowersState());

// DRPC stream: given a URL, compute its reach (number of distinct followers).
topology.newDRPCStream("reach")
    .stateQuery(urlToTweeters, new Fields("args"), new MapGet(), new Fields("tweeters"))
    .each(new Fields("tweeters"), new ExpandList(), new Fields("tweeter"))
    .shuffle()
    .stateQuery(tweetersToFollowers, new Fields("tweeter"), new MapGet(), new Fields("followers"))
    .parallelismHint(200)
    .each(new Fields("followers"), new ExpandList(), new Fields("follower"))
    .groupBy(new Fields("follower"))        // de-duplicate followers
    .aggregate(new One(), new Fields("one"))
    .parallelismHint(20)
    .aggregate(new Count(), new Fields("reach"));

The possibilities of Trident + Storm, combined with fast, scalable datastores such as Cassandra, are enormous: everything from real-time counters and filtering to complex event processing, machine learning, etc.
The Storm concepts of Spout [data generation] and Bolt [data processing] can be easily understood by most programmers. Storm is an asynchronous, highly distributed framework, but with a simple distributed RPC server it can easily be used from synchronous code.

The only drawback I have seen is that DRPC is focused on Strings (and other primitive types that can be encoded in a String). Support for more complex objects (via Kryo, Avro, Protocol Buffers, etc.), or at least raw bytes, would be useful for companies that do not focus only on tweets.

Data Analytics as a Service

April 18, 2012

Every company uses Microsoft Office, and especially Excel, to do some sort of data analytics. However, data volumes have grown exponentially and have outgrown spreadsheets. You need experts in the business domain, in data analytics, in data migration/extraction/transformation/loading, in server management, etc. to get data analytics done at Big Data scale. This makes it expensive and usable only by the happy few.

Why? There must be easier ways to do it.

I think there are. If you are unfamiliar with data analytics but eager to learn, take a look at a product called RapidMiner. It is close to amazing how a non-expert is able to use neural networks, decision trees, support vector machines, genetic algorithms, etc. and get meaningful results in minutes. The amazing part is also that RapidMiner is open source, so for use by a single analyst it is free.

Rapid-i.com, the company behind RapidMiner, also offers server software to run data analytics remotely. This is where Big Data opportunities meet easy data analytics. What if RapidMiner analytics could be run on hundreds of servers in parallel, and you paid by usage just as you pay for any Cloud compute and storage instance?

RapidMiner as a Service

RapidMiner as a Service, RMaaS, would allow millions of business people to analyse Big Data "without Big Investments". This type of Data Analytics as a Service would give any SME the same data analytics tools as large corporations. Data could come from Amazon S3, Amazon's DynamoDB, hosted Hadoop clusters, any web service, any social network, etc.

Visual as a Service

RapidMiner as a Service is only one of many domain-specific tools that could be offered as a visual drag-and-drop Cloud service. VAS as a Service is another example, in which complex telecom assets can easily be combined in a drag-and-drop manner. There are many more. These services will be the real revolution of Cloud Computing, since they combine IaaS/PaaS/SaaS into a new generation of solutions that bring large savings for users and potentially large revenues for their providers…
