Hiring Drupal developers is difficult. Hiring great Drupal developers in the current market often feels close to impossible. They are highly sought after and most of the people on the market, in all honesty, aren’t very good.
I’ve put together a list of the best Drupal interview questions that I’ve used over the years to screen Drupal candidates. Hopefully you’ll find them useful.
both the 5.x and 6.x versions are now available for download on github. sorry, i just can't do CVS anymore. to download:
- start by going here: http://github.com/cailinanne/log4drupal
- then click the "all tags" drop-down and choose the appropriate version
- then click the download button
a full description of the module is available here
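if you prefer the command line, the same code can be fetched with git instead of the download button (a sketch only: the tag name shown is illustrative, so list the tags first to find the real one):

```shell
# clone the repository and check out a tagged version
git clone git://github.com/cailinanne/log4drupal.git
cd log4drupal
git tag                 # list the available version tags
git checkout 6.x-1.0    # illustrative tag name; pick one from the list above
```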
drupal 6 included an upgrade to the built-in logging functionality (watchdog). drupal 6 exposes a new hook, hook_watchdog, which modules may implement to log drupal events to custom destinations. it also includes two implementations: the dblog module, which logs to the watchdog table, and the syslog module, which logs to syslog.
with these upgrades, log4drupal is a less critical addition to a drupal install, and i hesitated before providing a drupal 6 upgrade. eventually, however, i decided that log4drupal is still a useful addition to a drupal development environment, as it provides the following features still not found in the upgraded drupal 6 watchdog implementation:
- a java-style stacktrace including file and line numbers, showing the path of execution
- automatic recursive printing of all variables passed to the log methods
- ability to change the logging level on the fly
in addition, the drupal 6 version of log4drupal includes the following upgrades over the drupal 5 version:
- all messages sent to the watchdog method are also output via log4drupal
- severity levels have been expanded to conform to RFC 3164
- the log module is now loaded during the drupal bootstrap phase, so that messages may be logged during bootstrap
you may download the drupal 6 version here. see below for general information on what this module is about and how it works.
installing drupal is pretty easy, but it's even easier if you have a step-by-step guide. i've written one that will produce a basic working configuration with drupal 6 on debian lenny with php5, mysql5 and apache2.
all commands that follow assume that you are the root user.
let's get started!
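as a taste of what the guide covers, the skeleton looks something like this (a sketch only: package names are the debian lenny ones, and the drupal download url and version are illustrative, not exact):

```shell
# install the lamp stack (run as root; debian lenny package names)
apt-get update
apt-get install apache2 php5 php5-mysql php5-gd mysql-server

# create a database and user for drupal (credentials illustrative)
mysql -u root -p -e "CREATE DATABASE drupal CHARACTER SET utf8; \
  GRANT ALL ON drupal.* TO 'drupal'@'localhost' IDENTIFIED BY 'secret';"

# unpack drupal 6 into the web root (url/version illustrative)
cd /var/www
wget http://ftp.drupal.org/files/projects/drupal-6.x.tar.gz
tar xzf drupal-6.x.tar.gz
```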
traffic to a website can be divided into four major sources: direct, paid, organic and referrals. unsurprisingly, google analytics segments the traffic sources reports accordingly.
there is, however, a small catch. the ever-growing popularity of search engines has led to an odd use case: users who use a search engine to search for exactly your domain name, instead of simply typing www.mydomain.com into their web browser. these users have just reached your site via an "organic search", and google analytics will classify them accordingly.
technically this is correct, but semantically it's troubling. the users who have reached your site by typing "mydomain" into Google have far more in common with the users that entered www.mydomain.com into their URL bar and far less in common with those users that reached your site by typing "my optimized search term" into Google. and the population of these users is not small - on one of the commercial drupal sites that i maintain these "mydomain" Google searchers account for over one third of the supposedly organic traffic.
before the release of google analytics advanced segments, one could estimate the volume of "True Organic" pageviews by starting with the organic search volume, then using the keyword report to subtract all the "mydomain" keywords (mydomain, mydomain.com, and, my personal favorite www.mydomain.com).
thankfully, advanced segments now gives us an easy way to create a "True Direct" and "True Organic" segment - in which all the "mydomain" organic searches have been removed from the organic segment, and stuck in the direct segment instead.
amazon have just released ebs, the final piece of technology that makes their ec2 platform really viable for running lamp stacks such as drupal.
ebs, the "elastic block store", provides sophisticated storage for your database instance, with features including:
- high io throughput
- data replication
- large storage capacity
- hot backups using snapshots
- instance type portability e.g. quickly swapping your database hardware for a bigger machine.
amazon are quickly rectifying these problems, and recently announced elastic ip addresses: a "static" ip address that you own and can dynamically point at any of your instances.
today amazon indicated that persistent storage will soon be available.
three weeks ago, zicasso.com launched a drupal-powered free personalized online travel service that aims to connect travelers to a global network of quality, pre-screened travel companies. unlike many internet travel sites which provide cheap fares or packages, zicasso is targeted for busy, discerning travelers who want to plan and book complex trips (the ones with multiple destination stops or activities).
zicasso chose to build their application using the open-source cms drupal, to leverage the wide array of web 2.0 functionality provided by the open source community.
the application was rapidly constructed by a small development team led by cailin nelson and jenny dickinson. the team took advantage of "core" drupal modules including cck, panels, views, imagecache, workflow and actions.
in this article we get a sense of lamp performance on ec2 by running a series of benchmarks on the drupal cms system. these benchmarks establish read throughput numbers for logged-in and logged-out users, for each of amazon's hardware classes.
we also look at op-code caching, and gauge its performance benefit in cpu-bound lamp deployments.
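for reference, enabling an op-code cache on a lamp stack usually takes only a few commands; here is a sketch using apc via pecl (the benchmarks' exact cache may differ, and the php.ini path is the debian apache2 one):

```shell
# install and enable the apc op-code cache (run as root;
# php.ini path is debian's and may differ on your system)
pecl install apc
echo 'extension=apc.so' >> /etc/php5/apache2/php.ini
/etc/init.d/apache2 restart
```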
the previous article showed you how to set up jmeter and create a basic test. to produce a more realistic test you should simulate "real world" use of your site. this typically involves simulating logged-in and logged-out users browsing and creating content. jmeter has some great functionality to help you do this.
when making scalability modifications to your system, it's important to quantify their effect, since some changes may have no effect or even decrease your scalability. the value of advertised scalability techniques often depends greatly on your particular application and network infrastructure, sometimes creating additional complexity with little benefit.
apache jmeter is a great tool to simulate load on your system and measure performance under that load. in this article, i demonstrate how to setup a testing environment, create a simple test and evaluate the results.
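for orientation, once you've unpacked a jmeter release, running it looks roughly like this (the test plan file name is illustrative):

```shell
# from jmeter's bin directory:
./jmeter                                        # launch the gui to build and debug a test plan
./jmeter -n -t drupal-read.jmx -l results.jtl   # non-gui run of a saved plan, logging results
```

the non-gui mode is what you want for the actual load run, since the gui itself consumes resources that would skew your measurements.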
one of the biggest problems is the lack of constants. how many times have you wanted to code something like light_grey = #CCC? instead you are forced to repeat #CCC throughout your css. this quickly creates difficult-to-maintain and difficult-to-read code.
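as a hedged illustration (the file names and the @TOKEN@ convention here are mine, not any standard), one lazy workaround is to keep each value in a single place and generate the stylesheet:

```shell
# define the "constant" once, in the shell
LIGHT_GREY='#CCC'

# a stylesheet template with placeholder tokens (names illustrative)
cat > style.css.in <<'EOF'
h1   { color: @LIGHT_GREY@; }
.box { border: 1px solid @LIGHT_GREY@; }
EOF

# substitute the token everywhere to produce the real stylesheet
sed "s/@LIGHT_GREY@/$LIGHT_GREY/g" style.css.in > style.css
```

changing the color now means editing one line and regenerating, instead of hunting down every occurrence of #CCC.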
clearly guardians shouldn't be used as a crutch for a badly configured system. used appropriately, however, they can decrease downtime due to unexpected events or administrator error.
in this article, i describe how to implement, install and configure a guardian using a lightweight bash script. i go on to describe how to watch over your lamp install using this guardian. please note that all code and configurations have been tested on debian etch but should be useful for other *nix flavors with subtle modifications.
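to give a flavor of the approach before diving in, here is a minimal guardian sketch (the url, service and interval are illustrative; the article's script is more complete):

```shell
#!/bin/bash
# minimal guardian: poll the web server and restart it when the check fails
URL="http://localhost/"   # illustrative; point at a page your server must serve
INTERVAL=60               # seconds between checks

while true; do
  # -q quiet, -O /dev/null discard the page; a non-zero exit means the check failed
  if ! wget -q -O /dev/null --timeout=10 --tries=1 "$URL"; then
    logger -t guardian "check of $URL failed, restarting apache2"
    /etc/init.d/apache2 restart
  fi
  sleep "$INTERVAL"
done
```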
the blessing and curse of cck is the ability to quickly create very complex node types within drupal. it doesn't take very long before the input form for a complex node type becomes unmanageably long, requiring your user to do a lot of scrolling to reach the bottom of the form. the obvious solution is to break your form into multiple pages, but there is no easy way to do this. two proposed solutions exist: the cck wizard module and a drupal handbook entry. however, the well-intentioned cck wizard module doesn't seem to work, and the example code in the drupal handbook becomes tedious to repeat for each content type. to fill the void, i bring you cck witch.
cck witch is based on the same premise as the handbook entry: the most natural way to divide a cck form into pages is to use field groups. from there, however, cck witch diverges, taking a relatively lazy yet effective approach to the problem of multi-page forms: on every page we render the entire form, but then simply hide the fields and errors that do not belong to the current step. it also offers an additional feature: when the form is complete and the node is rendered, an individual edit link is provided for each step, allowing the user to update the information for a particular page in the form without having to step through the entire wizard again.
if you've now read enough to be curious to see the goods, then please, be my guest and skip straight to the live demo.
using the term "content management system" to describe the drupal cms understates its full potential. i prefer to consider drupal a web-application development system, particularly suitable for content-heavy projects.
what are the fantastic four?
drupal's application development potential is provided in large part by a set of "core" modules that dovetail to provide an application platform that other modules and applications build on. these modules have become a de facto standard: drupal's fantastic four. our superheroes are cck, views, panels and cck field types and widgets. if you are considering using drupal to build a website of any sophistication, you can't overlook these.
if you've set up a clustered drupal deployment (see scaling drupal step three - using heartbeat to implement a redundant load balancer), a good next step is to scale your database tier.
in this article i discuss scaling the database tier up and out. i compare database optimization and different database clustering techniques. i go on to explore the idea of database segmentation as a possibility for moderate drupal scaling. as usual, my examples are for apache2, mysql5 and drupal5 on debian etch. see the scalability overview for related articles.
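to make "scaling out" concrete, one common technique is mysql replication; the master-side configuration looks something like the sketch below (values are illustrative, and this is only one piece of a full replication setup, which also needs a replication user and slave configuration):

```shell
# append the master-side replication settings to my.cnf (run as root)
cat >> /etc/mysql/my.cnf <<'EOF'
[mysqld]
server-id = 1
log_bin   = /var/log/mysql/mysql-bin.log
EOF
/etc/init.d/mysql restart
```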
UPDATE: for the drupal 6 version, please go here.
if your career as a developer has included a stay in the j2ee world, then when you arrived at drupal one of your initial questions was "where's the log file?". eventually, someone told you about the watchdog table. you decided to try that for about five minutes, and then were reduced to using print_r to scrawl debug data across your web browser.
when you tired of that, you learned a little php, did a little web research, and discovered the PEAR Log package and debug_backtrace(). the former is comfortably reminiscent of good old log4j, and the latter finally gave you the stacktrace you'd been yearning for. still, separately, neither gave you quite what you were looking for: a log file in which every entry includes the filename and line number from which the log message originated. put them together, though, and you've got log4drupal.
log4drupal is a simple api that writes messages to a log file. each message is tagged with a particular log priority level (debug, info, warn, error or emergency), and you may also set the overall log threshold for your system. only messages with a priority level above your system threshold are actually printed to your log file. the system threshold may be changed at any time, using the log4drupal administrative interface. you may also specify whether or not a full stack trace is included with every message. by default, a stack trace is included for messages with a priority of error and above. the administrative options are illustrated below:
i got some good feedback on my dedicated data server step towards scaling. kris buytaert in his everything is a freaking dns problem blog points out that nfs creates an unnecessary choke point. he may very well have a point.
having said that, i have run the suggested configuration in a multi-web-server, high-traffic production setting for six months without a glitch, and feedback on his blog gives examples of other large sites doing the same thing. for even larger configurations, or if you just prefer, you might consider another method of synchronizing files between your web servers.
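for example, a periodic rsync from a master web server is one such method (hostnames and paths illustrative; this trades nfs's immediate consistency for simplicity):

```shell
# push the shared drupal files directory to another web head;
# -a preserves permissions/times, -z compresses, --delete mirrors removals.
# suitable for a cron job on the master.
rsync -az --delete /var/www/drupal/files/ web2:/var/www/drupal/files/
```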
install.php, you were right. your bum was hanging squarely out of the window, and you should probably consider beefing up your security.
drupal's default exposure of files like cron.php presents inherent security risks, for both denial-of-service and intrusion. combine this with critical administrative functionality available to the world, protected only by user-defined passwords broadcast over the internet in clear text, and you've got potential for some real problems.
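one cheap mitigation is to restrict sensitive entry points to localhost. a sketch, assuming apache 2.2 access-control syntax and that you run it from your drupal root:

```shell
# append an access rule for cron.php to the drupal root .htaccess
# (apache 2.2 Order/Deny/Allow syntax; adjust the path for your layout)
cat >> .htaccess <<'EOF'
<Files "cron.php">
  Order deny,allow
  Deny from all
  Allow from 127.0.0.1
</Files>
EOF
```

with this in place, only cron jobs running on the web server itself can hit cron.php; the outside world gets a 403.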
don't get me wrong, i'm a happy customer of the drupal hovertip module. everything worked out of the box, and i've enjoyed using it to cram even more pictures into my website. however, the included default css leaves a little to be desired, for the following reasons:
- it's too specific. it assigns a very particular look and feel to your tooltips, complete with background colors, fixed widths and font sizes. sure, in theory, you can override all that in your theme css. but if css specificity is not your thing, you're going to be tearing your hair out trying to figure out how to do it.
- the ui element chosen to indicate "hover here" is non-standard. the "hover here" directive is admittedly fairly new, but the emerging standard seems to be the dashed-underline (certainly not the italic font used in the drupal hovertip module).
- the clicktip css does not work on ie6. the link to close the clicktip has mysteriously gone missing.
you can download a more generic, flexible version of the necessary hovertip module css that solves all these issues here. here are some examples of how to use it.
when the time comes for scalability: moving out of the garage. if you are lucky, eventually the time comes when you need to service more users than your system can handle. your initial steps should clearly focus on getting the most out of the built-in drupal optimization functionality, considering drupal performance modules, optimizing your php (including considering op-code caching) and working on database performance. John VanDyk and Matt Westgate have an excellent chapter on this subject in their new book, "pro drupal development".
once these steps are exhausted, inevitably you'll start looking at your hardware and network deployment.
out of the box, the views module allows you to specify access to the view according to user role. this is a critical feature, but sometimes it's not enough. for example, sometimes you may want the view access to depend on the arguments to the view.
specifically, let's suppose that we have implemented facebook-style threaded mail, and we want to use a view to display all the messages in a thread. the thread id is an argument passed to the view. we only wish to allow the view to be accessed by one of the authors of the thread, or users with the 'administer messages' permission.
here's a three-step approach to resolving this dilemma:
previously, we discussed implementing all of the node hooks for CCK content types except hook_access. unfortunately, there is no access op for hook_nodeapi. adding this to drupal core is the topic of much discussion on drupal.org. so far a resolution to the issue has failed to be included in drupal 5 and drupal 6, and is now on deck for consideration in drupal 7.
this is a complicated issue, and the experts are debating with good cause. in the meantime though, if you need to move on, here's what you can do.
a common path followed by advanced drupal developers using cck is the following:
- create a content type using cck
- create a supporting custom module to handle advanced customizations. typically, the module is given the same name as the content type
in this custom module, developers then attempt to implement standard drupal hooks like hook_submit. much confusion then arises as to why the drupal hook is not firing for the cck content type.
the reason is that node hooks like hook_submit and hook_view only fire for the module that owns the content type. for cck content types, the module that owns the content type is content (i.e. cck), not your supporting custom module. therefore, drupal leaves your supporting custom module totally out of the loop!
if you've set up a clustered drupal deployment (see scaling drupal step two - sticky load balancing with apache mod_proxy), a good next step is to cluster your load balancer.
one way to do this is to use heartbeat to provide instant failover to a redundant load balancer, should your primary fail. while the method suggested below doesn't increase the load balancer's scalability, which shouldn't be an issue for a reasonably sized deployment, it does increase your redundancy. as usual, my examples are for apache2, mysql5 and drupal5 on debian etch. see the scalability overview for related articles.
if you've set up your drupal deployment with a separate database and web (drupal) server (see scaling drupal step one - a dedicated data server), a good next step is to cluster your web servers. drupal generates a considerable load on the web server and can quickly become resource constrained there. having multiple web servers also increases the redundancy of your deployment. as usual, my examples are for apache2, mysql5 and drupal5 on debian etch. see the scalability overview for related articles.
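the general shape of the sticky mod_proxy setup on the load balancer is sketched below (member hostnames and the cookie name are illustrative; treat this as an outline, not a drop-in config):

```shell
# enable the proxy/balancer and header modules (run as root)
a2enmod proxy proxy_http proxy_balancer headers

# define the balancer: each web head gets a route name, and a cookie
# set on the first response pins the client to that member
cat > /etc/apache2/conf.d/drupal-balancer.conf <<'EOF'
<Proxy balancer://drupal>
  BalancerMember http://web1.example.com route=web1
  BalancerMember http://web2.example.com route=web2
</Proxy>
ProxyPass / balancer://drupal/ stickysession=BALANCEID
Header add Set-Cookie "BALANCEID=balancer.%{BALANCER_WORKER_ROUTE}e; path=/;" env=BALANCER_ROUTE_CHANGED
EOF

/etc/init.d/apache2 reload
```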
if you've already installed drupal on a single node (see easy-peasy-lemon-squeezy drupal installation on linux), a good first step to scaling a drupal install is to create a dedicated data server. by dedicated data server i mean a server that hosts both the database and a fileshare for node attachments etc. this splits the database server load from the web server, and lays the groundwork for a clustered web server deployment. here's how you can do it. as usual, my examples are for apache2, mysql5 and drupal5 on debian etch. see the scalability overview for related articles.
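the two essential moves on the data server look roughly like this (hostnames, passwords and paths are illustrative; mysql must also be configured to listen on the network, via bind-address in /etc/mysql/my.cnf):

```shell
# on the data server: let the web server connect to mysql remotely
mysql -u root -p -e "GRANT ALL ON drupal.* TO 'drupal'@'web1.example.com' \
  IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"

# export the shared drupal files directory over nfs to the web server
echo '/var/www/drupal/files web1.example.com(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
```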
installing drupal is pretty easy, but it's even easier if you have a step-by-step guide. i've written one that will produce a basic working configuration with drupal5 on debian etch with php5, mysql5 and apache2. it might be a help on other configurations too. see the scalability overview for related articles.