Because of the popularity of the first comparison between ELK and Graylog, a second post was in order, and a page in the Graylog documentation gives a perfect example to illustrate the difference between the two.

The page is located here:
http://docs.graylog.org/en/2.4/pages/ideas_explained.html

That page explains that the vision for Graylog began after its founders took offense at the price quoted for an unnamed product. From that moment, Graylog was created as an alternative to that product in the log management market. It is purpose-built to be a log management application and has been since its beginning.

The ELK stack was not purposefully created to be a log management application; it just happens to be good at that, along with many other uses.  Elasticsearch was created first, and at its core it is a full-text search engine based on Apache Lucene.  Yes, it can store data, but its purpose is to provide search services over that data.  A large number of applications and websites, including Graylog, use Elasticsearch to provide their back-end data storage and search capabilities.  Logstash was then introduced to process data (logs just happen to be one type of data), and Kibana was introduced later to look at that data (with logs again being only one type of data).
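To make that concrete, here is a minimal sketch using the official Elasticsearch Python client. The index names and the localhost endpoint are assumptions, and the exact keyword arguments vary between client versions; the point is only that Elasticsearch indexes and searches JSON documents, and a log line is just one kind of document.

```python
from elasticsearch import Elasticsearch

# Assumes a local, unsecured Elasticsearch node; adjust URL/auth for your setup.
es = Elasticsearch("http://localhost:9200")

# Index an arbitrary JSON document -- nothing about it is log-specific.
es.index(index="app-events", document={"user": "alice", "action": "checkout", "amount": 42.50})

# A log entry is just another document in another index.
es.index(index="app-logs", document={"level": "ERROR", "message": "payment gateway timed out"})

# Full-text search works the same way over either kind of data.
results = es.search(index="app-logs", query={"match": {"message": "timed out"}})
for hit in results["hits"]["hits"]:
    print(hit["_source"]["message"])
```

Logstash and Kibana sit on either side of this same API: one feeds documents in, the other queries them back out for display.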

ELK deals with data, of which logs are one type.  Graylog deals with logs.  ELK is a powerful Swiss Army knife; Graylog is a fine-tuned, purpose-built application with a narrow focus.  If your requirements fall inside that narrow focus of logs only, Graylog works great.  Want to do business intelligence, graph trends, build complex visualizations to illustrate a concept, or even act as a full-blown NetFlow collection and visualization engine?  Buy something else to do it, or use ELK.

This is the key differentiation.  In more traditional IT departments, logs are seen only as logs: something to search through and troubleshoot with.  You buy business intelligence products, you buy products to do NetFlow, you buy separate products to do analytics on each application, and you buy logging products such as Graylog to do logging.  Thanks to the DevOps revolution and the widespread adoption of cloud services, a different subset of IT is realizing that logs are more than just logs; they contain enormous volumes of useful data about every facet of a business that would normally take very expensive ‘enterprise’ products to decode.  If those logs are then combined with purposefully collected data such as metrics and historical data, the ‘logging’ solution suddenly becomes a big data platform, and that platform can start to replace many paid-for enterprise products and give insights into the business that were previously unknown (similar to Hadoop and Spark, though those cater to different and deeper big data requirements).
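As a rough illustration of that idea, the same client can run analytics-style aggregations over documents that started life as log events. This is again only a sketch: the index, field names, and interval are hypothetical, and aggregation syntax differs slightly across Elasticsearch versions.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Sum a business value (order amount) per day, straight out of indexed events.
results = es.search(
    index="app-events",
    size=0,  # only the aggregation buckets are needed, not the raw hits
    aggs={
        "per_day": {
            "date_histogram": {"field": "@timestamp", "calendar_interval": "day"},
            "aggs": {"revenue": {"sum": {"field": "amount"}}},
        }
    },
)
for bucket in results["aggregations"]["per_day"]["buckets"]:
    print(bucket["key_as_string"], bucket["revenue"]["value"])
```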

It just happens that many people use ELK *only* for logging and nothing else, while logs are the *only* thing Graylog is good at.  ELK does logging pretty well, and developers tend to prefer it over Graylog, but its major strengths lie in everything else it can do in addition to logging.

Summary
There are a few questions to answer before you can compare Graylog and ELK for logging and decide between them.

Are there big data and analytics requirements in the environment?  Is cost a factor?

If there are no big data and analytics requirements, then either will work.  Graylog is purpose-built to do only logs and is easier to use for log management.

If there are requirements for analytics and cost is a factor, ELK can fulfill the logging role plus a good amount of the analytics, and it can do so pretty cost-effectively.  Commercial products for analytics and big data are easier to use, but if cost is a factor, the choice is likely between a free product (ELK) and doing nothing at all.

 
