04/01/17 cakephp, elk-stack, logging, php

CakePHP 3 out-of-the-box logging to Logstash and Elasticsearch using Syslog


Logging, like testing, is never a developer's first priority… until you have wasted enough time reactively debugging errors in production (or any other environment): a user reports an error or malfunction in, in our case, a CakePHP 3 application; the ops guys get asked for webserver and application logs; they mail you the logs; you look at the logs and try to reproduce the problem. Let's be honest, that sucks and is no way of (developer) life. It barely works when you maintain a single application, and it makes your professional (and personal) life a living hell if you have dozens of applications or application instances (e.g. containers).

So let's fix it. In this post I will walk you through my experiences using Syslog to ship CakePHP application logs to Elasticsearch via Logstash.

All code examples are located here.

Meet the team

The components used in this post.


First, let's define the components we use to make it work. For the sake of simplicity I won't go into operating systems, webservers, clusters, storage et cetera, just the bare essentials. Examples and file locations will be Ubuntu based and, for those wondering, this example was created on two Ubuntu 16.04 VMs, though it can work on or inside containers or other distributions just as easily.

As you can see in the figure on the side we have, in this example, two servers: server X containing the CakePHP web application and syslog (default on e.g. Ubuntu), and server Y containing Logstash and Elasticsearch.

Caveat: I am not covering the K in the famous ELK-stack, Kibana. Just to keep this post as small and focused as I can.


Configuring CakePHP 3

The first component to configure is CakePHP. The CakePHP cookbook is pretty straight-forward about using Syslog for CakePHP logging instead of the default FileLog engine.

So first apply the instructions from the cookbook with some extra configuration keys by adding this block to your config/bootstrap.php:
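A sketch of such a block, based on the cookbook's Syslog example (the MY_APP prefix is a placeholder; pick your own application tag):

```php
<?php
// config/bootstrap.php
use Cake\Log\Log;

Log::config('default', [
    'className' => 'Syslog',
    // Delay opening the syslog connection until the first message is
    // logged, and also print messages to stderr (nice for containers).
    'flag' => LOG_ODELAY | LOG_PERROR,
    // Tag every message so syslog can recognize this application.
    'prefix' => 'MY_APP',
    // Use a dedicated facility so app logs don't mix with system logs.
    'facility' => LOG_LOCAL7,
]);
```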

The keys that aren't super obvious are explained below:

'flag' => LOG_ODELAY | LOG_PERROR means PHP should delay opening the connection to syslog until the first message is logged AND it should also print log messages to standard error (nice when using containers, for example). Take a look at the PHP docs for openlog.

'facility' => LOG_LOCAL7  uses local7 as a dedicated syslog facility for this app so its messages don't get mixed up with other (server) log messages.

This simple addition to your bootstrap.php makes all CakePHP logs get pushed to syslog.

Tip: if you have trouble with the steps above, just try to implement the example as stated in the cookbook (no flag, no facility) and you should see the CakePHP log messages among all the other logs in /var/log/syslog.

Setting up syslog

Next up: configure syslog to ship the log messages to Logstash and write them to /var/log/my_app.log. To do so, add a syslog configuration to /etc/rsyslog.d/ called my_app.conf containing the following (adjust to your situation):
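A sketch of what my_app.conf could contain (the Logstash hostname is a placeholder for your server Y):

```
# /etc/rsyslog.d/my_app.conf
# Write everything arriving on facility local7 to a dedicated file...
local7.*    /var/log/my_app.log
# ...and forward it to Logstash over TCP (@@ = TCP, a single @ = UDP).
local7.*    @@logstash.example.com:5140
```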

These two lines simply state: write all incoming messages on local7 to /var/log/my_app.log and send them to the Logstash host on port 5140 over TCP (@@ means TCP, a single @ means UDP).

Restart syslog after you added the file ( $ sudo service rsyslog restart )

Your /var/log/my_app.log  file will now be filled with log messages looking like this:
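A reconstructed and abbreviated sample of such messages (the hostname ubuntu, the MY_APP tag and the exception are the ones used throughout this post; your output will differ):

```
Jan  4 15:50:01 ubuntu MY_APP: error: [Cake\Network\Exception\InternalErrorException] An Internal Server Error Occurred
Jan  4 15:50:01 ubuntu MY_APP: error: Request URL: /articles
Jan  4 15:50:01 ubuntu MY_APP: error: Stack Trace:
Jan  4 15:50:01 ubuntu MY_APP: error: #0 /var/www/my_app/vendor/cakephp/cakephp/src/Http/Runner.php(65): ...
Jan  4 15:50:01 ubuntu MY_APP: error: #1 ...
```

Note how every line of the single CakePHP error gets the full syslog prefix; this becomes important later on.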


Making Logstash ready to receive

Now we are going to configure Logstash to listen for syslog formatted messages on port 5140 and send them to Elasticsearch.

Extracting data from the messages will follow in the next chapter.

Ok, so let's create a new Logstash config. Assuming a default install, create a file called my_app.conf in /etc/logstash/conf.d/ and add this content:
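A minimal sketch of that file; the filter section stays empty for now and the Elasticsearch host is an assumption (adjust it to your setup):

```
# /etc/logstash/conf.d/my_app.conf
input {
  tcp {
    # Listen for the messages rsyslog forwards over TCP on port 5140.
    port => 5140
    type => "syslog"
  }
}

filter {
  # Event manipulation is added in the next chapters.
}

output {
  elasticsearch {
    # Location of your Elasticsearch instance.
    hosts => ["localhost:9200"]
  }
  # Print each event to stdout while debugging.
  stdout { codec => rubydebug }
}
```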

Let's talk about the three sections in this Logstash configuration file. First the input section: this defines the protocol (tcp) and port (5140) to listen on, as you can see in the comments. The next phase of the Logstash pipeline is the filter, where we can manipulate the incoming event in many different ways; we will discuss this in the next chapter as it is a subject of its own. Finally there is the output section, where we can define multiple output plugins; in our case we use the elasticsearch plugin to send data to ES (plus a default stdout debug plugin).

Grok it like it’s hot

So, now we receive every line of syslog as an event in Logstash, and Logstash will send every line to Elasticsearch. *Did he say every line?* Yes I did, and yes, that sucks. Since (sys)logging is a single-line affair and CakePHP logs errors with context like stack traces, the log message as a single entity is written as individual lines to syslog, and thus every line becomes a single event in Logstash and Elasticsearch. In our example log (shown earlier) we would get 14 different, totally meaningless events in Elasticsearch, since no line on its own means anything (ok… except for the first line… BUT I WANT IT ALL!).

Now, I am not the only one with that issue, so Logstash has just what we need: the multiline codec. Codecs are ‘translators’ or ‘encoders/decoders’ we can use in the input section; the multiline codec will collapse multiline messages and merge them into a single event. The original goal of this codec was to allow joining of multiline messages from files into a single event, and that's what we are going to use it for:
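The input section with the codec added could look like this sketch:

```
input {
  tcp {
    port => 5140
    type => "syslog"
    codec => multiline {
      # A line containing [Some\Exception\Class] starts a new event...
      pattern => "\[%{GREEDYDATA}\]"
      # ...and every line NOT matching the pattern is merged into...
      negate => "true"
      # ...the previous event.
      what => "previous"
    }
  }
}
```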

You can see we define a codec in the input section called multiline and configure it with three keys:

  1. pattern => "\[%{GREEDYDATA}\]" The pattern should match what you believe to be an indicator that the field is part of a multi-line event, in our case the  [Cake\Network\Exception\InternalErrorException] in our log message is the type of exception, present in every CakePHP exception message. We match it by using the  %{GREEDYDATA} grok pattern between [ and ].
  2. negate => "true" The negate option can be true or false (defaults to false). If true, a message not matching the pattern will constitute a match of the multiline filter and the what will be applied (and vice-versa).
  3. what => "previous" The what must be previous or next and indicates the relation to the multi-line event.

This simple codec will thus make sure the different syslog messages will again be merged into one string / event.

Now that we have our original message back (with a lot of extra syslog date, facility, etc. prefixes) we want to extract all the different interesting pieces of information we want to aggregate in Elasticsearch. How are we going to do that?


Yes, grok. If I had to define grok without reading any descriptions of it, I would define it as easy-to-use regex that non-developers can understand and apply <insert ops joke here>. And basically that's what it is: predefined (regex) statements, for example %{LOGLEVEL}, which matches the severity levels of log messages.

And you have to admit,  %{LOGLEVEL} is a lot more understandable than the corresponding regex:  ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)

Extend the filter with the following code to match the useful information:
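A sketch of such a grok filter, assuming the syslog line format shown earlier; the field names (loglevel, exception, stacktrace, etc.) are my own choices:

```
filter {
  grok {
    match => {
      # (?m) lets GREEDYDATA span the newlines of the merged multiline
      # event, so the stack trace ends up in one field.
      "message" => "(?m)%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{WORD:program}: %{LOGLEVEL:loglevel}: \[%{GREEDYDATA:exception}\]%{GREEDYDATA:stacktrace}"
    }
  }
}
```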

It's important to understand that the grok match statement makes you define the order in which the message is formatted, using %{GREEDYDATA} as a wildcard for crap you don't need. Examine the match statement above in combination with the available grok patterns and the example log message, and it should make sense.

In the example above, the %{LOGLEVEL:loglevel} syntax means: extract the log level and add it to an event property called ‘loglevel’.

You can play with grok on several websites; I used one of the online grok debuggers to create the match above.

Cleaning up

The final part of our journey is cleaning up the crap, and by that I mean stripping the  Jan 4 15:50:01 ubuntu MY_APP: error: prefixes from the stacktrace capture (the final part of our grok match).

For this we use the mutate filter to strip off as much garbage as we can so we get a nice and clean stacktrace. Add the final part to our filter:
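A sketch of that mutate filter; the gsub regex approximates the syslog prefix from the example log and assumes the stacktrace field from the grok sketch above:

```
filter {
  mutate {
    # Remove the repeated "Jan  4 15:50:01 ubuntu MY_APP: error: "
    # prefix from every line of the captured stack trace.
    gsub => [
      "stacktrace", "[A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2} \S+ \S+: \w+: ", ""
    ]
  }
}
```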


With this final part in place, the complete Logstash config will look like this:
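Assembled from the pieces above, a sketch of the complete file (host names, ports, patterns and field names are the assumptions made earlier):

```
# /etc/logstash/conf.d/my_app.conf
input {
  tcp {
    port => 5140
    type => "syslog"
    codec => multiline {
      pattern => "\[%{GREEDYDATA}\]"
      negate => "true"
      what => "previous"
    }
  }
}

filter {
  grok {
    match => {
      "message" => "(?m)%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{WORD:program}: %{LOGLEVEL:loglevel}: \[%{GREEDYDATA:exception}\]%{GREEDYDATA:stacktrace}"
    }
  }
  mutate {
    gsub => [
      "stacktrace", "[A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2} \S+ \S+: \w+: ", ""
    ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }
}
```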

Restart Logstash to activate this config, generate errors and check Elasticsearch for the result.

Final words

First: There are many ways to get your (Cake)PHP logs to something like Logstash, this is only one of them. There are some reasons I picked this for our applications:

  • I wanted to keep my application independent of Logstash, so using Syslog as a middle man decouples it from Logstash and its service availability.
  • I want both syslog/Logstash logging as well as direct logging to stderr. The syslog option to send the logs to both a file (OPS likes this) and Logstash means I can log to three different places: file, stderr and Logstash.
  • Using monolog is a serious and valid option you might want to consider; for the sake of simplicity I used the default CakePHP functionality instead.

Second and finally: this is only one part of the feedback loop. Using Kibana and e.g. the Elasticsearch watcher (alerting) functions to visualize, dashboard and react on log messages is a logical next step.

Thank you for reading.

11/01/15 cakephp, php

Using (cake)PHP to create desktop applications


I recently was looking for a way to ‘ship’ my PHP application to a customer for demonstrator/prototype review purposes. I first started looking for php-to-exe kinds of packagers. There are some commercial tools that can do that, but the one I really liked at first sight was phpdesktop! I also found Nightrain, which does almost exactly the same but has a huge downside (in my opinion), and that is the Qt dependency. This requires my user to download and install 500+ MB of Qt for Windows in order to run the application. The upside of Nightrain is the cross-platform support.

Back to the problem I needed solving: shipping my entire application in a zip file containing a simple exe to start running my app. Phpdesktop obviously was the way to go, and I started out by downloading it and placing my PHP code in the www directory. The framework I use for my applications is CakePHP, and this required some extra steps to get it working. Before I get into detail about the exact steps taken, I want to skip ahead to my final solution. In my early CakePHP years I once heard someone say: pluginize all the things, and that is exactly what I did. I embedded phpdesktop into a CakePHP plugin, making it possible to install that plugin (called Cakedesktop) and create an offline phpdesktop-based CakePHP application directly from your web browser.

In short, any CakePHP user can turn their web application into a desktop application. The steps the plugin currently takes are the following:

0. Retrieve options from initial webpage (users can specify things like full screen, application name, etc.)

1. Copy phpdesktop skeleton dir to tmp dir
2. Copy entire CakePHP directory to tmp dir’s www dir
3. Apply options for phpdesktop (as given in step 0)
4. Copy loaded php extensions to new php.ini (check loaded extensions and copy them to be loaded in phpdesktop's php.ini, for maximum compatibility)
5. Remove .htaccess files (Mongoose has no mod_rewrite for pretty urls)
6. Edit core.php and bootstrap.php to disable url rewrite and remove this plugin
7. Dump MySQL database (pure php code, no mysqldump command)
8. Convert SQL to Sqlite compatible SQL (work in progress)
9. Edit database.php to activate Sqlite
10. Import database structure and data in Sqlite
11. Zip package
12. Cleanup job dir
13. Serve package

So that’s it folks, a complete PHP application as a standalone Windows desktop application in less than a minute depending on the speed of your webserver 😉

The project has been submitted to and accepted in the CakePHP plugin repo.

GitHub url:

And of course my thanks to Czarek Tomczak! Keep up the good work!

(Originally posted in!topic/phpdesktop/zg5OfGwVWqs) 

13/12/14 php, webservices

PHP SoapClient with cURL powered request

I stumbled upon a SOAP related problem at work recently: I needed to consume a SOAP web service secured by two-way authentication via self-signed SSL certificates (…). This service was provided by another department, so there was no arguing about whether to use SOAP or, my personal preference, REST.

My project is built out of two components: an AngularJS front-end webapp and a CakePHP powered REST API. The CakePHP API creates one endpoint for the front-end clients and connects them to all sorts of different data sources (MongoDB, local MySQL, remote MySQL and a SOAP service).

So, back to the problem (and solution) at hand. For SOAP I started out using PHP's native SoapClient class. This works pretty nicely if you have the WSDL and XSD files for the service; it should even work with SSL certificates, private keys and passphrases… except it doesn't… For some reason (PHP 5.3, don't ask…) the requirement of the service to pass both a client certificate and a private key with passphrase doesn't work (failed to enable crypto error).

After some searching I kept finding a workaround using cURL to make the SOAP request, so I went with that and could indeed make the SOAP call. The result, unfortunately, was of course raw XML that I had to parse manually to get my data. Apart from that, I also had to create the SOAP envelope and request body myself by building a string instead of using the data param the SoapClient way.

Ok, enough problems and inconveniences, let's get to the solution!

I extended the SoapClient class and overrode the __doRequest method with a cURL powered method body. Returning the raw XML of the result afterwards lets the SoapClient class parse it and validate it against the XSD. This way you have the best of both worlds: the power and ease of the SoapClient class and the flexibility of cURL for the actual request.
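A sketch of what such a subclass could look like; the class name, certificate paths and passphrase are placeholders, and you should point CURLOPT_CAINFO at your own CA bundle rather than disabling verification:

```php
<?php
// Hypothetical SoapClient subclass whose transport is cURL, for services
// requiring a client certificate plus a private key with passphrase.
class CurlSoapClient extends SoapClient
{
    public function __doRequest($request, $location, $action, $version, $oneWay = 0)
    {
        $ch = curl_init($location);
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => $request,
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_HTTPHEADER     => [
                'Content-Type: text/xml; charset=utf-8',
                'SOAPAction: "' . $action . '"',
            ],
            // Two-way SSL: client certificate and private key with passphrase.
            CURLOPT_SSLCERT        => '/path/to/client-cert.pem',
            CURLOPT_SSLKEY         => '/path/to/client-key.pem',
            CURLOPT_SSLKEYPASSWD   => 'your-passphrase',
            // Trust the self-signed server certificate via your own CA file.
            CURLOPT_CAINFO         => '/path/to/ca.pem',
        ]);
        $response = curl_exec($ch);
        if ($response === false) {
            throw new SoapFault('HTTP', curl_error($ch));
        }
        curl_close($ch);
        // Return the raw XML so SoapClient parses and validates it.
        return $response;
    }
}
```

You would then use it exactly like a normal SoapClient, constructed with your WSDL and called with plain PHP arrays as parameters.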

The code is posted on GitHub as a gist including an example:


