
Archive for the ‘High Scalability’ Category

Instant Ruby on Rails – Instantly deploy and scale your Rails app on any cloud

September 16, 2013

Many developers and devops engineers do a lot of repetitive tasks every day. One of them is deploying a web app and scaling it. We all know the theory of deployment: install an app server, install a database, deploy your app on the app server and load your data into the database.

Scaling is also a common problem, but the answers are well known: put a load balancer in front, duplicate your app server, create database slaves for read-only data, create a database cluster for high volumes of writes, use in-memory or NoSQL databases for extremely high write volumes, use memcached to avoid hitting the database, use Varnish to avoid hitting the web server, etc.
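
As a small illustration of the memcached step in that recipe, here is a cache-aside sketch assuming the spymemcached Java client; the host name, key scheme and loadProfileFromDatabase helper are hypothetical:

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class ProfileCache {
    private final MemcachedClient cache;

    public ProfileCache(String host) throws Exception {
        // 11211 is the default memcached port
        cache = new MemcachedClient(new InetSocketAddress(host, 11211));
    }

    // Cache-aside: try memcached first, fall back to the database, then populate the cache
    public String loadProfile(long userId) {
        String key = "profile:" + userId;
        String profile = (String) cache.get(key);
        if (profile == null) {
            profile = loadProfileFromDatabase(userId);  // hypothetical slow database call
            cache.set(key, 300, profile);               // cache the result for 5 minutes
        }
        return profile;
    }

    private String loadProfileFromDatabase(long userId) {
        return "...";  // stand-in for a real SQL query
    }

    public static void main(String[] args) throws Exception {
        ProfileCache profiles = new ProfileCache("cache-host.internal");
        System.out.println(profiles.loadProfile(42L));
    }
}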

So these are not new problems; they are common recurring tasks for devops and developers. What if instant solutions were available, so that anybody in the world, independent of their level of knowledge, could instantly install a scalable solution?

At Ubuntu we think Open Source blueprint solutions for these common problems should be within everybody’s reach. Instantly deploying and scaling a Rails app on any cloud is already a reality: https://juju.ubuntu.com/docs/howto-rails.html. The next step is to make it even easier: one command or one drag-and-drop to deploy a complete stack in a high-availability configuration, or even one command to get continuous deployment plus high availability at once. This is exactly why we are organizing a contest, with six categories, in which you can win $10,000. Two of the categories should be familiar to you by now: high availability and continuous deployment.

Can you imagine the extra time you would gain if all these common recurring problems instantly disappeared? Especially when you consider that what is common and recurring for some experts might be rocket science for the rest of us. If you haven’t played around with Juju yet, there has never been a better time…

 

Startups: Want to launch new features hourly?

If you read sites like highscalability.com, you will certainly have read about the big-name dotcoms that deploy new features to production up to tens of times a day. For most startups, bringing features to production is still a manual, or at best semi-manual, process. You have the odd start-up that has it all automated, but unfortunately that is often a sign that they have too much time on their hands, which points towards more critical problems.

What if startups did not have to worry about how to set up hourly feature deployment? What if they could get an open source solution that delivers flexible and highly scalable continuous deployment in minutes?

What if startups could launch new features faster than the top dotcoms and scale almost as well?

If this sounds attractive to you, or you know a start-up to which it would be, then you should visit this blog post. Ubuntu has launched a beta program, and if enough startups sign up, they will build an instant, scalable open source continuous deployment solution for them.

A Big Data-Base that is fast but inaccurate: BlinkDB

April 6, 2013

The idea might sound strange at first: why would you want a database that delivers inaccurate data? The point is that BlinkDB trades accuracy for speed. When you query data you can specify either how quickly you want the answer, e.g. within 2 seconds, or how accurate the answer should be, e.g. 1% error with 95% confidence.
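
As an illustration, here is a sketch of what such bounded queries could look like from Java, assuming BlinkDB is reached through a Shark/Hive-style JDBC endpoint (an assumption on my part); the table and columns are hypothetical, and the WITHIN and ERROR … CONFIDENCE clauses follow the syntax described in the BlinkDB paper:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BlinkDbQueries {
    public static void main(String[] args) throws Exception {
        // Assumption: BlinkDB builds on Shark/Hive, so a Hive JDBC connection to a hypothetical host is used here
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection("jdbc:hive://blinkdb-host:10000/default", "", "");
        Statement stmt = conn.createStatement();

        // Time-bounded query: the best answer BlinkDB can produce within 2 seconds
        ResultSet byTime = stmt.executeQuery(
                "SELECT avg(session_time) FROM sessions WHERE city = 'Brussels' WITHIN 2 SECONDS");
        if (byTime.next()) {
            System.out.println("avg within 2s ~ " + byTime.getDouble(1));
        }

        // Error-bounded query: take as long as needed for 1% error at 95% confidence
        ResultSet byError = stmt.executeQuery(
                "SELECT avg(session_time) FROM sessions WHERE city = 'Brussels' ERROR 0.01 CONFIDENCE 95%");
        if (byError.next()) {
            System.out.println("avg at 1% error ~ " + byError.getDouble(1));
        }

        conn.close();
    }
}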

So if you have very large amounts of data (tens to hundreds of terabytes, or even petabytes) and you want quick, good-enough answers, then BlinkDB is for you. Facebook is an early adopter. Would you rather have Justin Bieber’s follower count exactly right in minutes, or 99% right while your page loads almost instantly? If you prefer fast, reasonably accurate answers over slow, exact ones, BlinkDB is worth checking out.

What can you use BlinkDB for?

  • The obvious use case is real-time reporting: when you need to take decisions in the blink of an eye, e.g. day trading, and a 5-10% error is acceptable, e.g. the average change of all commodity prices in the last 2 seconds.
  • Real-time bookings or price comparisons in which users want the best possible offer but accept a small error margin, e.g. a mobile bar-code scanner that delivers product price comparisons in 1 second instead of 10 will dominate the App Store.
  • Any counter on a large website: visitors, friends, tweets, total search results, etc.
  • Any power-law or long-tail data with a few extremely popular cases, e.g. Justin Bieber’s followers, or a very large set of infrequent cases, e.g. the number of blogs that have under 1000 visitors per month.
  • Machine learning solutions and recommendation engines that use collaborative filtering and other algorithms that compare an item or user with large groups of other items and users.
  • and many other use cases…

Real-Time Hadoop queries will be a reality in 2013

December 5, 2012

Real-Time Hadoop queries will be a reality in 2013 thanks to two new projects from Cloudera: Impala and Trevni.

Impala is an open source counterpart to Dremel, Google’s proprietary big data query engine. A first beta is available, and the production version is expected in Q1 2013.

Impala allows you to run real-time queries on top of Hadoop’s HDFS, HBase and Hive. No migrations necessary.

However, the real revolution will come when Trevni, from Doug Cutting [the creator of Lucene, Hadoop, etc.], is integrated into Impala. Trevni is a new columnar data storage format that promises superior read performance for large column-stored data sets.

Impala + Trevni promises real-time big data queries with multiple joins that are on par with Google’s Dremel in performance but offer more functionality…

Mesos: Your next highly distributed Cloud architecture framework

August 21, 2012

I initially complained about the complexity of installing Mesos when I was playing around with Spark and Shark. However, when I saw the Twitter Mesos and framework presentation, I understood why Mesos can be disruptive to how you architect applications in the highly distributed manner typical of cloud computing.

You can see the presentation here.

The key is that Twitter combined Mesos with ZooKeeper, Linux control groups (cgroups) and Google’s Protocol Buffers, as well as Spark, Storm and Hadoop. This gives them a way to easily build services that can be scaled to hundreds of Mesos nodes, automatically upgraded, and restarted in case of failure. Resource usage is controlled via the control groups, ZooKeeper manages the configuration, and Protocol Buffers ensure efficient communication between nodes. Services can use Spark and Storm for real-time operations and Hadoop for batch. Developers do not have to worry about scaling the services, deploying them to different nodes, etc.; this is handled by the Twitter framework and the Mesos master.

There is only one thing to add: “TWITTER PLEASE OPEN SOURCE YOUR TWITTER FRAMEWORK” or in Twitter language: “#mesos please #opensource #twitterfw now @telruptive “…

Hadoop for Real-Time: Spark, Shark, Spark Streaming, Bagel, etc. will be 2012's new buzzwords

August 15, 2012

The website defines Spark as a MapReduce-like cluster computing framework designed to support low-latency iterative jobs. However, it would be easier to say that Spark is Hadoop for real-time.

Spark allows you to run MapReduce-style jobs on distributed machines, together with your data. Unlike Hadoop, Spark can distribute your data in slices and keep it in memory, so your processing and your data are co-located in memory. This gives an enormous performance boost. Spark is more than MapReduce, however: it offers a new distributed framework on which different distributed computing paradigms can be modelled. Examples are Hadoop’s Hive => Shark (40x faster than Hive), Google’s Pregel / Apache’s Giraph => Bagel, etc. The upcoming Spark Streaming is supposed to bring real-time stream processing to the framework.

The excellent part

Spark is written in Scala and has a very straightforward syntax for running applications from the command line or via compiled code. The ability to run iterative operations over large datasets, or very compute-intensive operations in parallel, makes it ideal for big data analytics and distributed machine learning.
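
As a rough illustration, here is a minimal log-mining sketch using Spark’s Java bindings (the native API is Scala; package names below are from Apache Spark, and the HDFS path is hypothetical). Because the RDD is cached, both counts run against the in-memory slices instead of re-reading the file:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class LogErrors {
    public static void main(String[] args) {
        // "local[4]" runs Spark locally with 4 threads; on a cluster you would pass the master URL
        JavaSparkContext sc = new JavaSparkContext("local[4]", "log-errors");

        // Load a (hypothetical) log file and keep the distributed slices in memory
        JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/logs/app.log").cache();

        // Both queries reuse the cached, in-memory data
        long errors = lines.filter(new Function<String, Boolean>() {
            public Boolean call(String line) { return line.contains("ERROR"); }
        }).count();

        long warnings = lines.filter(new Function<String, Boolean>() {
            public Boolean call(String line) { return line.contains("WARN"); }
        }).count();

        System.out.println("errors=" + errors + " warnings=" + warnings);
        sc.stop();
    }
}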

The points for improvement

In order to use Spark, you need to install Mesos. Mesos is a framework for distributed computing that was also developed at Berkeley, so in a sense they are eating their own dog food. Unfortunately Mesos is not written in Scala, so installing Spark becomes a mix of make, ant, shell scripts, XML, properties, .conf files, etc. This would not be so bad if Mesos had consistent documentation, but due to its incubation into Apache the installation process is currently undergoing changes and is not straightforward.

Spark can connect to Hadoop, HBase, etc. However, running Hadoop on top of Mesos is “experimental” to say the least. The Hadoop integration should be lighter: in the end only access to HDFS, SequenceFiles, etc. is required. That should not require installing a complete Hadoop distribution and recompiling Spark for each specific Hadoop version.
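
To give an idea of how light that integration could be, here is a minimal sketch that reads a file using nothing but the plain HDFS client API (the namenode address and file path are placeholders):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hdfs://namenode:8020 is a placeholder address for your cluster
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/data/events.log"))))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}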

If Spark wants to become as successful as Hadoop, it should learn from Hadoop’s mistakes. Complex installation is a big problem, because Spark needs to be installed on many machines. The Spark team should take a look at Ruby’s RubyGems, Node.js’s npm, etc. and make installation just as simple, ideally via Scala’s own packaging tooling, even though it is less popular.

If possible, the team should drop Mesos as a prerequisite and make it optional. One of Spark’s competitors is Storm & Trident: you can install a Storm cluster in minutes and launch Storm on an EC2 cluster with a single command.

It would be nice if there were an integration SDK that allowed extensions to be plugged in. Integrations with Cassandra, Redis, Memcached, etc. could then be developed by others. A distribution in which Brisk is used to mimic Hive and HDFS (a.k.a. CassandraFS), pre-bundled with Shark, could also be an option. Spark’s in-memory execution and read speed, combined with Cassandra’s write speed, should make for a fast and scalable solution, ideally without the need to fight with namenodes, datanodes, jobtrackers and other hard-to-configure Hadoop inventions…

The conclusion is that distributed computing and programming are already hard enough by themselves. Programmers should be able to focus on their algorithms and not need a professional admin just to get started.

All in all, Spark, Shark, Spark Streaming, Bagel, etc. have a lot of potential; they are just a little bit rough around the edges…

Update: I am reviewing my opinion about Mesos. See the Mesos post.

Trident Storm, Real-Time Analytics for Big Data

August 13, 2012

I already mentioned Storm in a previous post. Trident is an extension of Storm that turns it into an easy-to-use distributed real-time analytics framework for Big Data. Both Trident and Storm were developed at Twitter.

One of Twitter’s major problems is keeping statistics on tweets and tweeted URLs that get retweeted to millions of followers. Imagine a famous person who tweets a URL to millions of followers; lots of those followers will retweet the URL. So how do you calculate how many Twitter users have seen the URL? This is important for features like “Top retweeted URLs”.

The answer was Storm, but with the addition of Trident it has become a lot easier to manage. Trident does to Storm what Pig and Cascading do to Hadoop: simplification. Instead of having to create a lot of Spouts and Bolts and take care of how messages are distributed, Trident comes with a lot of that work already done.

In a few lines of code, you set up a distributed RPC server, send it URLs, and have it collect the tweeters and their followers and count them. Fail-over and resilience, as well as massively distributed throughput, are built into the platform. You can see it in this example code:
TridentState urlToTweeters = topology.newStaticState(getUrlToTweetersState());
TridentState tweetersToFollowers = topology.newStaticState(getTweeterToFollowersState());

topology.newDRPCStream("reach")
    // look up everyone who tweeted the URL passed as the DRPC argument
    .stateQuery(urlToTweeters, new Fields("args"), new MapGet(), new Fields("tweeters"))
    .each(new Fields("tweeters"), new ExpandList(), new Fields("tweeter"))
    .shuffle()
    // look up the followers of each tweeter, in parallel
    .stateQuery(tweetersToFollowers, new Fields("tweeter"), new MapGet(), new Fields("followers"))
    .parallelismHint(200)
    .each(new Fields("followers"), new ExpandList(), new Fields("follower"))
    // count each follower only once: group by follower, then count the groups
    .groupBy(new Fields("follower"))
    .aggregate(new One(), new Fields("one"))
    .parallelismHint(20)
    .aggregate(new Count(), new Fields("reach"));

The possibilities of Trident + Storm, combined with fast, scalable datastores like Cassandra, are enormous: everything from real-time counters and filtering to complex event processing and machine learning.
The Storm concepts of Spout [data generation] and Bolt [data processing] are easy for most programmers to understand. Storm is an asynchronous, highly distributed framework, but with a simple distributed RPC server it can easily be used from synchronous code.
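
For example, calling the “reach” function defined by the topology above from ordinary synchronous code is just a blocking call. A minimal sketch using Storm’s DRPC client (the DRPC host name is a placeholder; 3772 is the default DRPC port):

import backtype.storm.utils.DRPCClient;

public class ReachClient {
    public static void main(String[] args) throws Exception {
        // "drpc-host" stands for one of your DRPC servers
        DRPCClient client = new DRPCClient("drpc-host", 3772);
        // Synchronously invoke the "reach" function defined by the Trident topology
        String reach = client.execute("reach", "https://example.com/some-url");
        System.out.println("reach = " + reach);
    }
}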

The only drawback I have seen is that DRPC is focused only on Strings (and other primitive types that can be encoded in a String). Support for more complex objects (via Kryo, Avro, Protocol Buffers, etc.), or at least raw bytes, would be useful for companies that do not focus only on tweets.

Cloudify, an Open Source PaaS from GigaSpaces, is making Big Data Clouds easier to manage

May 24, 2012

Cloudify, from the scalability experts GigaSpaces, is still in its early stages. Unlike Google App Engine, Azure, Heroku, etc., this PaaS focuses on the application life cycle rather than on being a “transparent” application server and database. The main focus is automating application and service deployment, monitoring, autoscaling, etc. The closest competitor would be Scalr.

Unlike Scalr, Cloudify focuses on cloud neutrality. Rather than relying on specific Amazon services for scalability, it aims to be a neutral cloud platform. The advantage is that any cloud, private or public, can be used, and scenarios like hybrid clouds with cloud bursting from private to public are possible. GigaSpaces’ deep understanding of large-scale architectures is a good guarantee that Cloudify will scale in the future.

Cloudify is still missing some important functionality, like security, multi-tenancy, integrations with lower-level automation frameworks (e.g. Chef and Puppet), and complex upgrade management [e.g. rolling upgrades, MySQL schema upgrades, A/B testing of new features, etc.]. However, the roadmap points towards most of these items.

Software architects should understand the possibilities that Cloudify, Scalr, etc. bring. With a reusable automation framework, companies can spend more development and operations time on new business features and less on reinventing the wheel.

 

Rainbird could be Hadoop for Real-Time Analytics if only Twitter would open source it…

May 24, 2012

Twitter has a real-time analytics solution, Rainbird, that could easily become as important as Hadoop. They have talked about open sourcing it, but so far have not done so.

This post is an open invitation to Twitter to open source Rainbird and accelerate the adoption of real-time analytics in the world. Hadoop has changed thousands if not millions of companies. Rainbird could do something similar.

In order to gather people around this subject, I am proposing that you include #TWOSRB in your tweets. #TWOSRB stands for “Twitter please Open Source RainBird”.


Open Source Big Data Reporting & ETL show promise

March 16, 2012

With Hadoop/HBase/Hive, Cassandra, etc. you can store and manipulate petabytes of data. But what if you want nice-looking reports, or want to compare data held in a NoSQL solution with data held elsewhere? The two market leaders in the open source business intelligence space are now putting all their firepower into Big Data.

Pentaho Big Data seems to be a bit further ahead. They offer a graphical ETL tool, a report designer and a business intelligence server. These are existing tools, but support for Hadoop HDFS, MapReduce, HBase, Hive, Pig, Cassandra, etc. has been added.

Jaspersoft’s Open Source Big Data strategy is a little behind: connectors are not yet included in the main product, and several are still beta quality with missing documentation.

Both companies will accelerate the adoption of Big Data, since one of its main problems is easy reporting: unstructured data is much harder to turn into a well-structured report than structured data. Any solution that makes this possible, and is Open Source on top of that, is very welcome in times of cost cutting…
