Thursday, December 25, 2008

PostgreSQL: An ultimate strategy for full text search

Full text search is one of the most powerful features in PostgreSQL. In this blog entry, we'll start with a typical text search problem in its simplest form, and see how to implement its query under PostgreSQL. Then we'll evolve the problem bit by bit and see how we can modify our implementation accordingly, until we come up with an ultimate strategy for the text search problem in its most generic form.

In its simplest form, our text search problem would be selecting all users whose name matches the query "Andy Williams" (i.e. contains 'andy' or 'williams'):

SELECT * FROM users
WHERE to_tsvector(users.name) @@ to_tsquery('andy | williams')


That was pretty easy. Now, what if we're searching for a query in more than one column, from one or more tables? e.g. Select users whose users.name or profiles.full_name matches "Andy Williams". In such a case we'll have to use the concatenation operator ('||') to concatenate all the columns we'll search in, with a space in between so that adjacent values don't merge into one token. Notice the use of 'coalesce' to replace null values with empty strings, because concatenating anything with null returns null:

SELECT * FROM users LEFT JOIN profiles
ON users.id = profiles.user_id
WHERE to_tsvector(coalesce(users.name, '') || ' ' || coalesce(profiles.full_name, '')) @@ to_tsquery('andy | williams')


Both of the previous queries will return records in no specific order, which is not acceptable in a typical text search problem. A common requirement is ordering the results by relevance, i.e. how relevant a record is to the given search query. PostgreSQL offers a function, ts_rank_cd, which evaluates how relevant a vector is to a query.

SELECT users.*, profiles.*, ts_rank_cd(to_tsvector(coalesce(users.name, '') || ' ' || coalesce(profiles.full_name, '')), to_tsquery('andy | williams')) as rank
FROM users LEFT JOIN profiles
ON users.id = profiles.user_id
WHERE to_tsvector(coalesce(users.name, '') || ' ' || coalesce(profiles.full_name, '')) @@ to_tsquery('andy | williams')
ORDER BY rank DESC


Now, what if we are looking for "Andy Williams" in both users and their dependants (a one-to-many relation), where joining will yield repeated records? In the simple case, where no relevance order is needed, we just eliminate repetitions using DISTINCT, selecting only the users columns so that differing dependant columns don't defeat it. It's that simple because we don't care how many times a single record occurred in the results, as we're not interested in its relevance:

SELECT DISTINCT users.* FROM users LEFT JOIN dependants
ON users.id = dependants.user_id
WHERE to_tsvector(coalesce(users.name, '') || ' ' || coalesce(dependants.name, '')) @@ to_tsquery('andy | williams')


Now let's look at the more complex, more realistic, most generic case. We need to return user records whose names or dependants' names match "Andy Williams", returning each user only once, with the most relevant record first. A simple DISTINCT is semantically wrong in this case, because a user with two "andy williams" dependants is more relevant than a user with only one.

One approach to this problem is to select a new aggregated column containing the concatenation of the dependants' names, include it in the search, and group the results by user. The problem with it is that there is no pre-defined string-concatenation aggregate function. PostgreSQL offers a way to define custom aggregate functions, so we can define our own concatenation aggregate (see the sketch below). However, we'll also face the group-by limitation of PostgreSQL: all selected columns must appear in the group-by clause, which pretty much complicates the query.
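For illustration, a minimal sketch of such a custom aggregate, using the built-in textcat function (the function behind the || operator); the aggregate's name is made up:

CREATE AGGREGATE textcat_all (
    basetype = text,
    sfunc    = textcat,
    stype    = text,
    initcond = ''
);

-- it could then appear in the select list of the grouped query as:
-- textcat_all(coalesce(dependants.name, '') || ' ')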

This brings us to the ultimate strategy, which serves all the mentioned requirements and eliminates the mentioned problems. The idea is that all searchable columns are automatically aggregated into a new system-maintained column, by a database trigger or an application-level callback. Then our task is as simple as searching that extra column for our query. In our example, an application callback could be used to watch over changes done to users (insert, update) and dependants (insert, update, delete). The callback's task is to re-calculate the 'textsearch' column that contains the concatenation of the user's name and the names of all his dependants. The text-search query then becomes as simple as this:

SELECT users.*, ts_rank_cd(to_tsvector(users.textsearch), to_tsquery('andy | williams')) as rank
FROM users
WHERE to_tsvector(users.textsearch) @@ to_tsquery('andy | williams')
ORDER BY rank DESC
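For illustration, the maintenance side could be a plain SQL function like the following sketch, callable from triggers on users and dependants, or from an application callback (everything but the users/dependants/textsearch names from our example is illustrative):

CREATE OR REPLACE FUNCTION refresh_user_textsearch(integer) RETURNS void AS $$
    -- recompute the aggregated search column for the given user id ($1)
    UPDATE users SET textsearch =
        coalesce(name, '') || ' ' ||
        array_to_string(ARRAY(SELECT d.name FROM dependants d WHERE d.user_id = $1), ' ')
    WHERE id = $1;
$$ LANGUAGE sql;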

Notice that an aggregation overhead is added to the update operations, while the search operations are now relieved of any joins. This is what makes this approach preferable: select operations are much more frequent than update operations in typical applications, so the trade results in an overall performance boost.

Wednesday, December 17, 2008

HTTP Basic Authentication and Realms

One of the most well known features of HTTP is Basic Authentication. You most probably know how to implement a basic authentication scheme in HTTP if you've spent enough time in web development. You usually send a response status code of 401 (Unauthorized) and let the web browser prompt the user for credentials. The browser will then resubmit the authentication header with every subsequent request under the same domain name.

However, using this simple scheme, one must assume that all requests under the same domain name are accessible to the same people (the concept of roles). Suppose that you want to allow user_1 to access one part of the site with his credentials, and user_2 to access another part with OTHER credentials. Using this simple scheme, it can't be done, because the browser treats the whole site as one unit. A user is either authenticated to access the whole site or not at all.

The solution to this problem is using the 'WWW-Authenticate' response header and the 'realm' keyword. This keyword simply tells the client that authentication is needed for a certain realm (or part) of the website.
WWW-Authenticate: Basic realm="site"

If the browser already has an authentication header for that realm, it will resubmit it. Otherwise, it won't submit just any authentication header merely because it belongs to that domain name; it will re-prompt the user for authentication. i.e. If a subsequent response has a header like this:
WWW-Authenticate: Basic realm="administration"

The browser won't resubmit the authentication header of the "site" realm. It will re-prompt the user for "administration" realm authentication.
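For completeness, the header the browser resubmits is the request's Authorization header, carrying the base64-encoded 'username:password' pair; this is the canonical example from RFC 2617:

GET /admin/index.html HTTP/1.1
Host: www.example.com
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==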

MySQL Vs PostgreSQL: key differences in queries

Coming from a MySQL background and trying PostgreSQL for the first time, I am experiencing some key differences in query syntax. Some queries that used to work fine under MySQL now produce errors under PostgreSQL. Generally, PostgreSQL's query syntax is tighter and closer to ANSI SQL. For anyone moving from MySQL to PostgreSQL, these differences will be helpful to know, so I'll post them in groups as I go deeper into PostgreSQL with time.

The first difference I encountered is that column aliasing in PostgreSQL requires an explicit 'AS'. This means that the following query, which used to work under MySQL, won't work under PostgreSQL:
SELECT count(id) count FROM users;

Instead, an explicit 'as' is needed before the alias:
SELECT count(id) AS count FROM users;


Another difference is the 'group by' issue. In MySQL, queries are allowed to group the results by a subset of, not necessarily all, the selected columns. For example, the following query works under MySQL:
SELECT users.name, users.id, count(telephones.id)
FROM users LEFT JOIN telephones
ON telephones.user_id = users.id
GROUP BY users.id;

Such a query doesn't work under PostgreSQL. When grouping, all selected columns (except aggregated ones) must appear in the group-by clause:
SELECT users.name, users.id, count(telephones.id)
FROM users LEFT JOIN telephones
ON telephones.user_id = users.id
GROUP BY users.id, users.name;

This limitation makes it hard to build a query that returns results that are distinct based on specific columns; i.e. this way I can't use 'group by' to return distinct results based on users.id only.
However, PostgreSQL comes with a nice feature that helps you return distinct results based on some of, not all, the selected columns; 'distinct on' can be used as follows:
SELECT DISTINCT ON (users.id) users.name, users.id
FROM users LEFT JOIN telephones
ON telephones.user_id = users.id;

Nevertheless, it is worth noting that a distinct-on clause constrains the order-by clause: the ORDER BY expressions must start with the DISTINCT ON expressions. i.e. The following query won't work, because the order-by list doesn't lead with the distinct-on columns:
SELECT DISTINCT ON (users.id) users.name, users.id
FROM users LEFT JOIN telephones
ON telephones.user_id = users.id
ORDER BY users.name;
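The fix is to lead the ORDER BY with the DISTINCT ON expressions; additional sort columns may follow them:

SELECT DISTINCT ON (users.id) users.name, users.id
FROM users LEFT JOIN telephones
ON telephones.user_id = users.id
ORDER BY users.id, users.name;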


Wednesday, December 3, 2008

HTTP: Forcing download

There are some content types, other than HTML, that web browsers can render, like images, XML, and PDF. When an HTTP response with one of those content-types is received, the default behaviour of most browsers is to render it, not to download it. To make a browser understand that the response is to be downloaded, not rendered, you must be explicit about it.

Now, how do you do it? There is a standard HTTP response header that many people just don't know about. It's called 'content-disposition'. Its default value for most browsers is 'inline', which causes the browser to render instead of download. To force a download, you must specify that your content-disposition is 'attachment'. You can also suggest a file name for the downloaded file:
Content-disposition: attachment; filename=name.ext

This way, the browser will understand that the response body is to be downloaded instead of rendered, even if it's a known type that can be rendered.
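In Rails, for instance, send_file sets this header for you. A minimal sketch in a controller (the path and file names are illustrative):

def download
  send_file "#{RAILS_ROOT}/public/files/report.pdf",
            :type        => 'application/pdf',
            :filename    => 'report.pdf',
            :disposition => 'attachment'
end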

Rails: has_many documentation fault

I was going through an implementation case where I wanted to add a has_many :through relation from a 'User' model to itself, through a 'Friendship' model. According to the documentation of has_many:
:through Specifies a Join Model through which to perform the query. Options for :class_name and :foreign_key are ignored, as the association uses the source reflection.
The relation should infer the class name and the foreign key from the :source option; i.e. if the source is :post, then the class is Post and the foreign key is post_id.

The problem was that I had a special case, where a self-referential many-to-many relation is needed. There was a User-to-User join model called Friendship. Its table has two relevant columns: user_id and friend_id. The association using :source would look like this:
has_many :friends, :through => :friendships, :source => :user

This wouldn't be right, because the relation would use 'user_id' to do the join instead of 'friend_id'.

To my surprise, the documentation was not right in this case. Adding the :class_name and :foreign_key options instead of :source worked like a charm.

has_many :friends, :through => :friendships, :foreign_key => 'friend_id', :class_name => 'User'

Sunday, November 30, 2008

NeverBlock at RubyConf 2008

It feels really good when the community shows appreciation for your effort. eSpace was invited to give a session about NeverBlock at RubyConf 2008. Here's the video of the session, delivered by Yasser Wahba.

Thursday, November 20, 2008

Full Text Search: PostgreSQL beats MySQL

Ever since I can remember, we've been relying on MySQL as the database backend for the open source solutions we develop at eSpace. MySQL's popularity is unquestionable; it is one of the most widely used DBMSs in the open source community. However, now with my eyes wide open, I can say that I prefer PostgreSQL over MySQL. And to your surprise, what shapes my opinion is not some performance benchmark or some detailed 'versus' report. I'd choose PostgreSQL for one single hell of a feature: Full Text Search.

At first glance, some people could be amazed by my reason. Some would argue that PostgreSQL is way more powerful, for many more important reasons. I agree with those, but let's stick to our context. Others will start to mumble that MySQL does have Full Text Search among its set of features. Yes, of course it does, but let me highlight the difference.

In MySQL, a full text search query that searches for the phrase "database systems" in the title and body of an 'articles' table looks like this:
SELECT * FROM articles
WHERE MATCH (title, body)
AGAINST ('database systems');

Now, in order to enable full text search on a table column, a full text index must have been created for this column. That shouldn't be a problem. The problem begins when you learn that, in MySQL, a full text index can only be created on tables that use the MyISAM storage engine. The problem with MyISAM tables is that they are not transactional, meaning that you can't perform COMMITs and ROLLBACKs on such tables. And that's why most decent MySQL applications rely on InnoDB/BDB as a storage engine: because they are transactional.

See? You simply have to lose one of the most important features given by any DBMS, and jeopardize your data consistency, just to enable text search. Of course, there are some hacks to work around this conflict, including some forms of replication, but it's still just not good.

On the contrary, in PostgreSQL, Full Text Search is such a relief. The same full text search query looks like this in PostgreSQL:
SELECT * FROM articles
WHERE to_tsvector('english', title || ' ' || body)
@@ to_tsquery('database | systems');

This query simply uses the 'english' language configuration to search the concatenation of title and body for 'database' or 'systems'. PostgreSQL has a single storage engine, so you don't change anything. Furthermore, this query will run nice and easy even if you didn't create text indices for the searchable columns. As you can expect, creating the text indices will massively speed up the search for large data volumes, but it's not mandatory. Full text search is enabled by default, and you don't have to give up being transactional to use it. For me, it's the first time that one single feature makes all that difference, changing my preference between two technology alternatives.
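For illustration, such an index could look like the following sketch (the index name is made up; note that the query must use the very same expression for the index to kick in):

CREATE INDEX articles_fts_idx ON articles
USING gin(to_tsvector('english', title || ' ' || body));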

Monday, September 29, 2008

Upload Manager: A Radiant CMS extension for batch uploads

Starting to work with Radiant, I found out that it is a very powerful CMS. However, it lacks an important administrative feature: file upload. A common requirement for any site admin is to be able to upload files and link to them in the site pages. So, setting out to add this feature to Radiant, I thought I should extract it as a generic extension for uploading multiple files easily.

Radiant Upload Manager is an extension for Radiant CMS that enables the admin to upload multiple files at once in a handy way. It is built on the SWFUpload Flash library, which enables selecting multiple files at a time for upload, while preserving the same HTML/CSS interface as the rest of the admin layout.

The idea of the uploader is pretty simple. A hidden Flash object is wrapped in a JavaScript wrapper. The wrapper initializes the Flash open-file dialog, which has the feature of multiple file selection. When the user chooses the files to upload, the Flash object starts uploading them, giving the wrapper a set of useful callbacks (uploadStart, uploadProgress, etc.) to update the HTML view.

You can download the extension at its GitHub page. For installation and usage, refer to the README.



Thursday, September 25, 2008

Extending FCKeditor Radiant extension

FCKeditor is one of the most powerful rich text editors out there. However, the FCKeditor extension for Radiant lacks a very important feature: enabling the site admin to change the editor's interface language. That feature was a requirement in a project I was working on, so I decided to dive into the extension's code to see what's going on in there, trying to figure out a way to hack that feature in.

Initially, what I already knew is that the FCKeditor Radiant extension allows changing the default language, text direction, and other configuration through editing /vendor/extensions/fckeditor/app/views/fckeditor/config.js.erb. The following line was of my concern:

FCKConfig.DefaultLanguage = 'en' ;

This is fine for default configuration, but how could I change these configurations at runtime? Taking a look at the resulting HTML of the editor, I noticed that the editor is loaded in an iframe element with id 'part_0_content___Frame'. What first jumped to mind is that I could figure out a way to reload the iframe, passing an extra language parameter to the page it loads. Then, reading this parameter with JavaScript, I could change the configuration that I want before the iframe reloads.

So, the solution was to extend the extension. I needed to add a button to the page-editing page to fire the switch-language functionality. Editing the extension file /vendor/extensions/fckeditor/fckeditor_extension.rb, I added this line to the extension's activate method:

admin.page.edit.add :part_controls, "language_btn"

This line adds the partial /vendor/extensions/fckeditor/app/views/admin/page/_language_btn.html.erb to the part-controls region of the page-editing page. Then I added that partial, with a button whose onclick handler reloads the iframe with the extra parameter. It could roughly look like this:

// note: appending with '&' assumes the iframe src already carries a query string
frame = $('part_0_content___Frame');
frame.src = frame.src + "&language=ar";

The remaining job is to intercept that extra parameter; editing config.js.erb:

// getting params from the url (guarding against a missing query string)
var query = window.location.href.split('?')[1];
if (query) {
  var params = query.split('&');

  // checking each param
  for (var i = 0; i < params.length; i++) {
    var pair = params[i].split('=');
    if (pair[0] == 'language') {
      FCKConfig.DefaultLanguage = pair[1];
    }
  }
}


So when the button we added gets clicked, the iframe reloads. This forces reloading of its internal script tags, among which is our code snippet added to config.js.erb. This snippet checks for the language parameter and acts accordingly. When the editor is reloaded, it comes up with the new default language.



Tuesday, September 23, 2008

Rails plugins: Metaprogramming vs Generators

One of the most powerful features of Rails is plugins. Plugins enable developers to write generic extensions to Rails applications that others can benefit from. One could use two different approaches to add new logic/aspects to a Rails application through a plugin: metaprogramming and generators. This entry is not a tutorial about writing Rails plugins. Assuming that you know the basics of metaprogramming in Ruby, and using a very basic example, we'll try to draw a simple comparison between the two approaches that can be generalized.

Assume that your plugin relies on adding some logic as a before_filter to all controllers. You have that logic in a module that you want to mix into ApplicationController, and you also want to declare that filter. Using the first approach, Ruby metaprogramming, you can apply virtually any changes to existing classes, modules and even objects. The following two lines do the job in our example:
ApplicationController.class_eval "include MyModule"
ApplicationController.class_eval "before_filter :my_filter"

Basically, what we have just done is tell the interpreter to dynamically evaluate those two lines in the context of ApplicationController. Those two lines are NOT lexically added to the definition of the class; it's as if they're added at runtime.

Another approach we could use is a Rails generator. A generator, in a nutshell, lexically adds generated code to the existing files. We could do the same job in the example using a generator as follows:


class MyGenerator < Rails::Generator::Base

  def manifest
    record do |m|
      m.gsub_file 'app/controllers/application.rb', /(#{Regexp.escape("class ApplicationController < ActionController::Base")})/mi do |match|
        "#{match}\n  include MyModule\n  before_filter :my_filter\n"
      end
    end
  end

end

Basically, what we have just done is search for the line class ApplicationController < ActionController::Base and LEXICALLY add the two lines right after it. Of course, this generator has to be run to apply its changes:
ruby script/generate my_generator


The tradeoff between generation and metaprogramming is simply the tradeoff between being mixed but explicit and being isolated but subtle. Generation may result in mixing some logic, but has the major advantage of being explicit: the resulting code is explicitly added to the project files, and can even be modified by the developer using the plugin. Metaprogramming enforces separation of concerns, but is done in a subtle way that could waste the developer's time investigating what happened behind the scenes.

Of course, there are situations where only generation could be used, like adding migrations, routes and other stuff. Take a look at the features Rails generators provide:

Rails/Generator/Commands/Base
Rails/Generator/Commands/Create
Rails/Generator/Commands/Destroy



Wednesday, September 17, 2008

Rails: RESTful namespaces shall set you free

Being one of its admirers, I try to conform to REST almost all the time, thanks to Rails' support. One of the problems in following the REST model is the problem of resource naming. In its simplest form, REST is about mapping each resource to a unique name with a set of conventional urls. The problem appears when you need more than one representation for some/all of your application resources (in the same format), or in other words, when some resources need to have colliding names.

A famous example of such a case is an application with administration modules. Typically, many of the resources need to be duplicated in another representation for the admin. For example, you already have a resource named 'Item' and you need another one with the same name and a different representation for the admin.

Rails solves this problem with a handy feature: RESTful namespaces. RESTful routes can be managed in different namespaces to avoid name collisions. What's elegant about it is that when you nest a resource in a namespace, it's mapped directly to a controller class that is a member of a module whose name is the same as the namespace. Also, the namespace contributes to the url helpers just as nested resources do.

For example, while the configuration:
map.resources :items

maps to ItemsController and generates a set of helpers/routes like:

items_path() => /items
item_path(id) => /items/id
new_item_path() => /items/new
edit_item_path(id) => /items/id/edit

, the configuration:

map.namespace :admin do |admin|
  admin.resources :items
end

maps to Admin::ItemsController and generates a set of helpers/routes like:

admin_items_path() => /admin/items
admin_item_path(id) => /admin/items/id
new_admin_item_path() => /admin/items/new
edit_admin_item_path(id) => /admin/items/id/edit


So, essentially what happens is a total collision removal (resource names, controller names, helpers, urls) with minimal effort. I gotta admit, I love Ruby in Rails.


Monday, September 15, 2008

HTTP authentication and Rails

There are situations where a Rails application is almost complete, only lacking the user-authentication aspect. In such situations, elegant/complex authentication schemes are sometimes not really needed. For example, consider a simple administration panel that is developed for managing another application. In such a case, HTTP basic authentication can be very handy.

HTTP basic authentication doesn't rely on cookies like the usual session scheme. It simply depends on the fact that the client will send all its requests accompanied by an authorization header that represents the username and password. Web browsers implement HTTP basic authentication in a comfortable way. When a user requests a page that requires HTTP authentication, the response status is 401 Unauthorized. The browser automatically detects this status and prompts the user for a username and password. The browser then repeats the request after adding the authorization header, and remembers to add it to all subsequent requests to the same domain name. Hence, a session is emulated.

Rails provides a very handy way of implementing HTTP authentication as a new aspect. The method authenticate_or_request_with_http_basic extracts the username and password from the authorization header and passes them to the given block as parameters. By applying a before filter like the following, simple HTTP authentication can be added to a complete Rails application as a new, isolated aspect:


def authenticate_admin
  authenticate_or_request_with_http_basic do |username, password|
    admin = Administrator.authenticate(username, password)
    if admin.nil?
      render :text => "", :status => :unauthorized
      return false
    end
    true
  end
end


where Administrator.authenticate returns an admin from the database if the username and password match one, and nil otherwise.


Wednesday, September 3, 2008

webistrano_privileges: a Rails plugin for Webistrano

Webistrano is a widely used tool for automated deployment of Rails applications. It makes Rails people's lives much easier. However, one of its most outstanding flaws is the lack of user-access control: all registered users can control all projects.

webistrano_privileges is a simple Rails plugin that I developed, introducing access control to Webistrano-1.3. After applying the plugin to your working Webistrano project copy, and running two shell commands, Webistrano will be accommodating a simple access control scheme. Admins can manipulate all projects. Non-admins can manipulate only THEIR projects. Admins can add users to and remove them from projects.

What the plugin basically does is:
- it generates a migration for a many-to-many relation between users and projects.
- it generates a route and a controller for adding and removing users from projects.
- it replaces some views to present the added functionality.
- it introduces some logic to prevent unauthorized access to projects by non-related users.

You can get the plugin from its github page. After downloading, only two steps are required:

- run the generate command, accepting whenever prompted to overwrite existing files:
ruby script/generate privileges_extensions

- migrate
rake db:migrate RAILS_ENV=production



Monday, September 1, 2008

Rails: simple_localization hates observers

One of the most helpful and simple-to-use Rails plugins is simple_localization. It makes the job of localizing a Rails application a lot easier and more straightforward. It implements localization through a set of features, one of the most commonly used being localized ActiveRecord error messages. It localizes the standard error messages generated by ActiveRecord validations ("can't be blank", "is too short", etc.).

Working on a Rails project, after localizing the whole application, everything was working just fine in the development environment. However, when trying it in the production environment, the localized-error-messages feature didn't seem to work properly. Error messages for some (not all) of the models were not translated. To my surprise, those models were exactly the ones being observed by declared observers.

Now, what seems to be the problem? The implementation of the localized-error-messages feature is as simple as follows: it just overrides ActiveRecord::Errors.default_error_messages with the translated version (simple_localization/lib/features/localized_error_messages.rb, line 26):

ActiveRecord::Errors.default_error_messages = ArkanisDevelopment::SimpleLocalization::LangSectionProxy.new :sections => [:active_record_messages],
    :orginal_receiver => ActiveRecord::Errors.default_error_messages do |localized, orginal|
  orginal.merge localized.symbolize_keys
end


The problem is that, during Rails initialization, observers declared in the standard way in the environment's configuration (and their associated models) are loaded before plugins (config/environment.rb):
Rails::Initializer.run do |config|
  config.active_record.observers = :user_observer
end

Using this line, we are not starting the observers; we are just telling Rails what observers to load when initializing. We don't control their loading order with respect to the loading of plugins. By default, observers load first, loading their models accordingly. Those models are then completely loaded, including their default_error_messages, way before the plugin is allowed to override them. This problem occurs only in the production environment, because in the development environment, models are reloaded with each request, inheriting the overridden default_error_messages.

To work around this problem, we need to deliberately postpone loading the observers until initialization is done. This can be done by explicitly setting the observers in the after_initialize block instead of the above standard scheme:
Rails::Initializer.run do |config|
  config.after_initialize do
    ActiveRecord::Base.observers = :user_observer
  end
end

This way, observers will load after initialization is finished, and all models will inherit the overridden default_error_messages.


Tuesday, June 24, 2008

Rails: Exception handling has never been easier

Consider the typical case where you've just finished writing your Rails application. You've considered all runtime errors that might appear in specific known scenarios. You've handled them all successfully, and you're about to deploy your application. The problem is, despite handling all "expected" exceptions, there are still some typical exceptions that might appear due to user input rather than a bug in your logic.

A famous example of those exceptions is ActiveRecord::RecordNotFound. This exception is raised when calling 'find' on a model class with a non-existing id. In many cases, the model record id is supplied as a parameter by the user. This means that such an exception, if not handled, could always be thrown, displaying an ugly stack trace to the user. Another example is ActionController::RoutingError: if a user tries a url that doesn't correspond to any application route, an ugly page will appear with the message "No route matching...". Of course, our concern also includes application bugs that might generate exceptions. We want them to be logged and handled gracefully until we deal with them.

As expected, Rails provides an awesome generic mechanism for handling all unhandled exceptions. What's beautiful about it is that it can be added as a new aspect to the application after we're done coding. The magical key is 'rescue_from', one of ActionController's class methods. Using rescue_from in ApplicationController, we declare a last line of defense for all exceptions left unhandled at deeper levels. The following code snippet is a typical example of handling all unhandled exceptions using rescue_from:
rescue_from Exception, :with => :rescue_all_exceptions if RAILS_ENV == 'production'

def rescue_all_exceptions(exception)
  case exception
  when ActiveRecord::RecordNotFound
    render :text => "The requested resource was not found", :status => :not_found
  when ActionController::RoutingError, ActionController::UnknownController, ActionController::UnknownAction
    render :text => "Invalid request", :status => :not_found
  else
    EXCEPTION_LOGGER.error("\nWhile processing a #{request.method} request on #{request.path}\n" +
      "parameters: #{request.parameters.inspect}\n" +
      "#{exception.message}\n#{exception.clean_backtrace.join("\n")}\n\n")
    render :text => "An internal error occurred. Sorry for the inconvenience", :status => :internal_server_error
  end
end


Now, what exactly does this code chunk do? It's quite simple. We declare that we want to handle exceptions of type 'Exception' (the parent of all exceptions) using the method 'rescue_all_exceptions'. We add a condition to do this only if the application is running in the production environment (of course we want the exception stacktrace in development, rather than a clean apology). Then we define the implementation of our exception handler 'rescue_all_exceptions'. Basically, we act according to the type of the passed exception. If it's RecordNotFound, we display a clean "not found" message. If it's a routing problem, we display a clean "invalid request" message. If it's neither of those, we assume that the exception is caused by a bug. We log the details of the request and the exception, and display a clean apology for an internal server error.

What have we just done? We have transferred our application from an unsafe state, where any runtime error could generate an ugly, user-irrelevant page, to a safe state where any runtime error generates an appropriate message, logging its details for future analysis and fixing.


Sunday, June 22, 2008

RESTful Rails: param_parsers and XML issue

Rails, as a framework, and REST, as an architecture, have been bound together since the first appearance of Rails. Rails seamlessly promotes REST through a set of awesome features, including RESTful routing, resource url helpers, respond_to blocks, and others.

One of the handiest features promoting the use of REST in Rails is ActionController::Base.param_parsers. param_parsers is a hash that maps standard (or even user-defined) MIME types to parser procedures. Such a procedure comes into play when the 'content-type' header of a request is set to one of the known MIME type names. In that case, the parser is automatically invoked to parse the request body and build the famous 'params' hash from it.

For example, "text/xml" and "application/xml" content-types both map to 'XML' MIME type. If a request is issued with such a content-type header, the procedure ActionController::Base.param_parsers[Mime::XML] is automatically invoked, evaluating params from the XML request body. This is really magnificent when you're writing a uniform RESTful application that's wanted to be able to comprehend, and respond with, multiple formats. You just write your action logic depending on the presence of the 'params' hash, letting the param_parsers to handle the translation.

However, there is a fundamental problem when talking XML. In a typical case, one of your controller actions could be waiting for more than one root param. The fundamental problem with XML is that a document can only have a single root element. This means that, when posting XML, you can only post one root parameter. For example, the action logic could be operating on two parameters: params[:name] and params[:email]. With no single parent for both, an XML form of this params hash can never be formed. And, of course, we don't want to change our application logic to overcome this problem; we need to attack it at its heart, the parsing part.

The good news is, it's hackable! Rails lets you define a custom MIME type and its param_parser, or even override an existing MIME type's param_parser. All we need to do is define a conventional wrapper to be used in such a case. For example, we could announce that, in all cases of sending multiple parameters in XML, a single parent called 'hash' (or anything else) should wrap all of them. Then we override the XML param_parser to extract the wrapped tags into params:
ActionController::Base.param_parsers[Mime::XML] = Proc.new do |data|
  source = Hash.from_xml data
  result = {}
  if source.keys.length == 1 and source.keys[0] == "hash"
    # extract the values wrapped inside the conventional 'hash' root
    source['hash'].each { |k, v| result[k.to_sym] = v }
    result
  else
    source
  end
end

All we have done is form a hash from the posted XML (data), then check for the special case of "a single root named hash" and extract its inner values to the root. This way, our application logic needn't be touched.
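For illustration, posting the two parameters from the example above would then look like this on the wire (the wrapper element follows our 'hash' convention; the values are made up):

<hash>
  <name>Andy Williams</name>
  <email>andy@example.com</email>
</hash>

The parser above turns this body into params[:name] and params[:email], just as the action expects.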

Wednesday, June 18, 2008

Rails: Dynamic Test Fixtures

One of the clearest virtues of the Rails framework is its powerful testing support. Each generated model is created along with its unit test and test fixture. Likewise, each generated controller is created along with its functional test. Among others, test fixtures have proven to be the handiest testing feature introduced by Rails.

Rails test fixtures are named preset database records, written in YAML, that allow you to define the state of the database before each test case is run. Also, being named, a fixture can be accessed as a model instance through its name.

Well, old news, I know. What I am introducing here is an extended feature of Rails fixtures called "dynamic fixtures". Let's describe it with an example. Assume that we have a model called 'customer' with the following fixture:
bob:
  name: Bob
  email: bob@looney.com
  appointment_date: 2008-07-01

Now assume that we need to write some code that does some logic on customers that have appointments today.
if customer.appointment_date == Date.today
...
Obviously, any code that tests this logic using the fixture as given will succeed only on 2008-07-01, and fail otherwise. What we really want is to declare a fixture whose appointment_date is always set to today. Well, cheer up, dynamic fixtures allow you to declare fixtures like the following:
<% TODAY = Date.today %>

today_customer:
  name: bob
  email: bob@looney.com
  appointment_date: <%= TODAY %>

not_today_customer:
  name: alice
  email: alice@looney.com
  appointment_date: <%= TODAY.advance(:days => 10) %>
Using these dynamic fixtures, tests will always run successfully, without any need to change the fixtures every time the tests are run.

Another well known situation where a dynamic fixture is needed is when we need to save in the database values that are computed at runtime. A typical example is when we save a hashed version of users' passwords instead of plain ones: we compute the hashed password at runtime. The fixture can look like this:
Bob:
  username: bob
  password: <%= User.hash('1234') %>
This way, we store a valid hashed version of the plain password '1234'. Handy, huh?


Monday, June 16, 2008

Rails: Order of requiring libraries actually counts

While working on a Rails project, I got stuck on a very confusing problem. I simply wanted to define a cache sweeper. The well known definition of any cache sweeper is:
class MySweeper < ActionController::Caching::Sweeper
end

To my surprise, I got an error: uninitialized constant "ActionController::Caching::Sweeper"! What the heck? The Sweeper class is not defined.

After a significant period of time spent snorkeling in the code of Rails ActionController, I found the line where the Sweeper class is defined (actionpack-2.0.2/lib/action_controller/caching.rb, line 627):
if defined?(ActiveRecord) and defined?(ActiveRecord::Observer)
  class Sweeper < ActiveRecord::Observer #:nodoc:
    ...

It turned out that the Sweeper class wasn't defined in my environment, just because ActiveRecord wasn't defined by the time 'action_controller' was required. I found out that I was actually requiring 'action_controller' before 'active_record'. I switched their order, and everything worked like a charm.

So, the bottom line is: ActiveRecord turned out to be a prerequisite for the definition of certain classes and modules in other libraries like ActionController and ActiveSupport, so always require 'active_record' first.
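To recap in code (a trivial sketch; the point is only the order):

# ActiveRecord must already be defined when action_controller is loaded,
# otherwise ActionController::Caching::Sweeper is silently skipped
require 'active_record'
require 'action_controller'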


Monday, June 2, 2008

Rails: Getting the best out of ActionMailer and smtp.gmail.com

A typical Rails developer needs to answer an important question when it comes to using ActionMailer to send mails: shall I use a local smtp server (like Exim or Sendmail), or shall I use a trusted, more reliable smtp server like smtp.gmail.com?

Let's list the pros and cons of each choice. Using a local smtp server residing on the same machine as the application reduces communication overhead and consequently improves performance. However, mails sent through such smtp servers will mostly be flagged as spam by the most famous mail providers, because the sender is not a trusted email account. Using smtp.gmail.com eliminates this hazard, as the mail messages are sent from a trusted account, but dramatically decreases performance due to blocking communication overhead. Also, the Gmail account used could be blocked if Gmail senses multiple connections using it, which is the typical case when the application is serving multiple requests.

Now let's tweak the second choice to eliminate its cons, getting the best out of ActionMailer and smtp.gmail.com. The basic idea is to prepare the mail to be sent, up to the point where the only step left is calling ActionMailer::Base.deliver. Instead of delivering the mail at this point, we'll dump it to the database, using one model for all emails in the application. Then we'll write a simple rake task that fetches those mails from the database, sends them, and deletes them. Following is a detailed example.

The ActionMailer configuration, normally, will look like this (after installing the TLS plugin):
config.action_mailer.smtp_settings = {
  :address        => 'smtp.gmail.com',
  :port           => 587,
  :domain         => "www.yourdomainname.com",
  :authentication => :login,
  :user_name      => "account@gmail.com",
  :password       => "@cc0unt_p@$$w0rd",
  :tls            => true
}


Assume one of the mailers in the application is called 'InvitationMailer'. The typical three lines of code that send an email are:
mail = InvitationMailer.create_invite(invitation)
mail.set_content_type("text/html")

# deliver mail
InvitationMailer.deliver mail


Instead of this, we'll add a new model, PendingMail, with two attributes: id (integer) and serialized_mail (text). We'll replace the last line to serialize the mail object and save it in the database:
mail = InvitationMailer.create_invite(invitation)
mail.set_content_type("text/html")

# dump the mail to the DB instead of calling deliver
PendingMail.new(:serialized_mail => Marshal.dump(mail)).save


The only thing left is a rake task that runs periodically on the server to send those mails:
  desc "Send all pending emails stored in DB table pending_mails"
task :send_pending_mails do
mails = PendingMail.find :all
for mail in mails do
ActionMailer::Base.deliver(Marshal.load(mail.serialized_mail))
mail.destroy
end
end
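One simple way to run the task periodically is a crontab entry like this (the path is an assumption; any scheduler will do):

# every five minutes, deliver whatever accumulated in pending_mails
*/5 * * * * cd /path/to/app && rake send_pending_mails RAILS_ENV=production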


Now, what did we gain from this scheme?
- First: we used smtp.gmail.com, so our emails are not considered spam.
- Second: we removed the blocking communication overhead with Gmail's smtp server, during which the request being served would be blocked.
- Third: we avoided the hazard of using the same Gmail account from multiple connections, because we're sure that only one process will be using this account at any time.


Sunday, May 11, 2008

Role-based views using CSS

In multi-role applications, most views need to contain almost the same content, but with different actions, varying according to the role of the user. This can be accomplished in most server-side views (like jsp, aspx, erb, etc.) in a traditional way:

<!--
common content
-->
<% if role1 %>
<!--
role1 actions
-->
<% elsif role2 %>
<!--
role2 actions
-->
<% elsif role3 %>
<!--
role3 actions
-->
<% end %>

This works just fine. However, it's just so ugly! Obviously we are embedding runtime logic in the view's description. This breaks the concept of separation of concerns. It also makes the page fragments hard to cache, because the resulting html varies according to role values.

The good news is that this can be accomplished in a much more elegant way. Basically, what we do is include all possible actions of all roles in the resulting html, relying on simple CSS style rules to show/hide those actions. We assign actions to CSS classes, then add simple style rules to show the right actions in the right contexts.

As an example, assume the possible roles are admin, user, and guest, and assume the role of the current user is known in the variable 'role'. We assign the 'body' element an id that varies according to the current user's role:

<body id="<%= role %>">

Then we add the common content

<!--
common content
-->

Then we add all the possible actions, assigning each action CSS class names that represent the roles in which the action should be displayed:

< a href='.....' class="admin" >
< a href='.....' class="admin user" >
< a href='.....' class="user guest" >
< a href='.....' class="guest" >

The default definition of all those CSS classes is:

.admin, .user, .guest { display: none }
so they are all hidden by default.
What's left is adding CSS style rules to the view's 'head' element to show the right actions in the right context:

<style>
body#admin .admin { display: block }
body#user .user { display: block }
body#guest .guest { display: block }
</style>

This simply means: show the admin actions if the body id is 'admin', show the user actions if the body id is 'user', and show the guest actions if the body id is 'guest'.

One more important thing to mention: this scheme introduces (or rather makes easier) a security vulnerability in the application. An attacker, assuming a guest role, can use any client-side tool to change the resulting html, show the hidden admin controls, and use them. Obviously, nothing prevents such an attack even if the proposed scheme is not used; the point is that we're making it easier. So, the bottom line is, the actions of different roles that come from hidden controls must be secured on the server side. Typically, we give everybody the controls to DO something, but make sure they are privileged to DO it when they try to.


Thursday, May 8, 2008

Rails: When Functional Tests are not Functional !!

One of the most well-known virtues of the Rails framework is that everything has its own place in the project hierarchy. A perfect example is testing. Everything associated with tests goes under the "test" directory. Moreover, further grouping takes place under the "test" directory: unit tests are to be used when testing model functionality, and functional tests are to be used when testing controller functionality. The hidden virtue of this modular division scheme is that the shape of the functional test cases you write to cover your controller code actually measures the degree of your code quality.

A typical 'good' Rails functional test case could look like this:

def test_create_new_item
  post :create, { :name => "name",
                  :description => "description" },
                { :session_key1 => "session_value1" }
  assert_response :success
  assert_template "create"
end

while a 'bad' Rails functional test case could look like this:

def test_create_new_item
  post :create, { :name => "name",
                  :description => "description" },
                { :session_key1 => "session_value1" }
  assert_not_nil Item.find_by_name("name")
  assert_equal users(:user1).item_count, 10
  assert_response :success
  assert_template "create"
end

So, what exactly makes the first example better than the second? Basically, functional tests are just about simulating requests and testing the 'behavior' of the controller action for such requests. The word 'behavior' means the group of characteristics of the request and response. For example, a good functional test case is one that simulates a request, then just asserts on the request and/or response headers and/or body, like the first example. A bad example asserts on a specific state of the data after the request is handled, like the second example. This is not right because, in such a case, you are making one of two mistakes:

- You are adding some redundant assertions to test some model functionality that is already covered in unit tests.
- Your controllers incorporate some model-level chunks of code, and consequently you needed to cover them in the functional tests. This is a strong indicator that these chunks of code are misplaced in the controllers. They need to be abstracted into some model functionality, and covered in unit tests.

So the bottom line is: in your functional tests, mind only the 'behavior' of the controller action you're testing. If you find your functional test cases not covering all controller code after all, reconsider your controller code. Go for some model abstraction of your controller code, and cover the new model code in the unit tests.


Rails: ActionMailer with TLS (SSL)

Starting to work with Ruby on Rails, ActionMailer was quite a relief. Writing code for mass mailing was just a matter of configuration. You just need to configure your application mailer to use a given smtp server account to forward emails generated by the application. The common problem I faced was that ActionMailer does not support TLS; i.e. it cannot be configured to use an smtp account on a server that uses SSL for authentication.

The typical configuration in such a case is (using an smtp server that doesn't require SSL for authentication):

config.action_mailer.smtp_settings = {
  :address        => 'smtp.mailserver.com',
  :port           => 123,
  :domain         => "your domain name",
  :authentication => :login,
  :user_name      => "account@mailserver.com",
  :password       => "account_password"
}

Or maybe you could use an smtp server that doesn't require authentication at all, disregarding that your mail will be considered spam by default by most reputable email providers:

config.action_mailer.smtp_settings = {
  :address        => 'smtp.mailserver.com',
  :port           => 123,
  :domain         => "your domain name",
  :authentication => :plain
}

That's it for the problem. We need to use the great ActionMailer with a reputable smtp server that requires SSL, at the same time. The solution is the magical plugin action_mailer_tls. You just download and install the plugin, and add one line to the smtp settings:

config.action_mailer.smtp_settings = {
  :address        => 'smtp.mailserver.com',
  :port           => 123,
  :domain         => "your domain name",
  :authentication => :login,
  :user_name      => "account@mailserver.com",
  :password       => "account_password",
  :tls            => true
}

That's it. Now you can generate mails and use that smtp server with TLS to forward your emails. Pretty handy, right?


Web 2.0: Client-side Views

In a previous article, we saw how DOM manipulation can become so handy using JavaScript Templates. Massive changes could be applied to the html document hierarchy using one-line JavaScript functions. In this article we will introduce a generic web development technique that we can call Client-side Views. We will build on our knowledge of JST to compose a closed set of tools for following client-side views with minimal effort. At the end, we will get to know the benefits obtained by following such a technique.

What are Client-side Views?

Client-side views can be defined as the concept of abstracting the server side of a web application to respond to different actions with a standard, non-graphical data format, leaving the task of forming a graphical representation of this data to the client side. This means that the server will not respond with formatted html as usual; it will respond with some data format (like xml). Notice that, obviously, this technique helps the future transformation of a web application into an abstract web service.

A Closed Set of Tools

Now let's introduce a closed set of tools that will help us follow client-side views with minimal effort. As you can guess, there is no restriction on the server-side development tools. However, on the client side we'll stick to JavaScript, being the most famous standard client-side scripting language. In addition, we'll select Prototype Ajax, JSON and JavaScript Templates.

Prototype Ajax

The Prototype JS library contains a very powerful abstraction of almost all the functionality of XMLHttpRequest (xhr). Hiding most of the details, Prototype Ajax introduces a very nice interface over native xhr Ajax. In this context, we are interested in only one API: Ajax.Request. We will send an ajax request to the server, expecting it to respond with a known data format (discussed in the next section). For more information about the Ajax.Request API, visit its documentation page.
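As a minimal sketch (the URL is illustrative), the call we'll build on looks like this:

new Ajax.Request('/users/12', {
  method: 'get',
  // invoked with the xhr transport object when the request succeeds
  onSuccess: callback
});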

JSON

As mentioned before, we will send an ajax request to the server, expecting a response in an agreed-on format. The handiest data format for our technique is JSON. JSON stands for "JavaScript Object Notation": it is the representation of an object in JavaScript. As we are using JavaScript as a language, we can't be happier than when the server responds with the notation that JavaScript understands best. Almost all programming/scripting languages have their own libraries for JSON manipulation. For example, in Ruby on Rails, we can just call "object.to_json" to obtain the JSON representation of a Ruby object. For more details about JSON and its libraries in different languages, visit json.org.
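For instance, a server responding with a user's data might return a payload like this (illustrative; the fields match the example from the previous article):

{ "user": { "name": "Haitham Mohammad", "telephone": "0127038199" } }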

Combining with JST

Combining this with JavaScript Templates (introduced in the previous article), our adopted scenario will be a close variant of the following:

- Send an Ajax request to the server application, providing a callback to be called on success of the request.
- The server responds with the JSON form of all the data needed to update the state of the front-end representation of the application.
- On request success, the callback we provided is called. A typical client-side renderer callback will look as follows:

function callback(transport){
  //evaluate the JSON string returned from the server
  //(one line)
  var jsData = eval( '(' + transport.responseText + ')' );

  //evaluate the resulting html provided the data
  //obtained from the server
  var newHTML = TrimPath.processDOMTemplate('jst_id',
                                            { jstData: jsData } );

  //insert the newly formatted html in its placeholder
  $('placeholder_id').innerHTML = newHTML;
}

Hurray! We have just transferred the view rendering logic, which we used to run on the server, to the client, using a three-line JS function.

Benefits

Let's compare our technique to the traditional ajax technique of sending an ajax request, forming an html snippet on the server, and returning it to the client to be inserted in its placeholder. Our benefits are:

- The ability to develop multiple clients with different graphical representations for the same web application (the concept of web services).
- A massive reduction in server response time; the time that used to be consumed in evaluating server-side templates is now saved and divided among other requests.
- A massive reduction in the bandwidth used; the size of the JSON text of some data is dramatically smaller than its formatted html representation.

However, there is a disadvantage we can't totally avoid; our typical three-line rendering function incorporates extensive string processing (parsing the template and evaluating the output html using the supplied data). It's a known fact that JavaScript is relatively slow on most web browsers when it comes to string operations. So, at the least, one has to watch the performance measures and try to optimize the code as far as possible, just not to make things worse.


Friday, May 2, 2008

JavaScript Templates: A major leap on the way of Web2.0

An undisputed fact is that JavaScript acts as a cornerstone of Web 2.0 application development. Web 2.0 applications rely extensively on JavaScript logic to manipulate DOM elements on the fly. Let's consider a typical example. A JavaScript function is needed to generate some HTML fragment and insert it into a placeholder that already exists in the document. Let's say that the resulting fragment represents some user's data and that it is required to look like this:

<img src='/users/12/image' title='Haitham Mohammad' alt='Haitham Mohammad' />
<label class='user_label'> Name: </label>
<span class='user_name'> Haitham Mohammad </span>
<label class='user_label'> Telephone: </label>
<span class='user_telephone'> 0127038199 </span>

Now let's see what that JavaScript function would look like. (I know it's not minimal, but it's a typical example; let's say that it will, at least, look like this:)

showUserDetails = function(user){
  placeHolder = document.getElementById('place_holder');

  img = document.createElement('img');
  img.src = user.imageSource;
  img.title = user.name;
  img.alt = user.name;
  placeHolder.appendChild(img);

  nameLabel = document.createElement('label');
  nameLabel.className = 'user_label';
  nameLabel.innerHTML = 'Name: ';
  placeHolder.appendChild(nameLabel);

  nameSpan = document.createElement('span');
  nameSpan.className = 'user_name';
  nameSpan.innerHTML = user.name;
  placeHolder.appendChild(nameSpan);

  telLabel = document.createElement('label');
  telLabel.className = 'user_label';
  telLabel.innerHTML = 'Telephone: ';
  placeHolder.appendChild(telLabel);

  telSpan = document.createElement('span');
  telSpan.className = 'user_telephone';
  telSpan.innerHTML = user.telephone;
  placeHolder.appendChild(telSpan);
}

Yes, I know, it's long and ugly. However, this is what I used to do in my first couple of months with JavaScript. Now let's formulate the problem that led to this long and ugly code. The problem is that HTML is a descriptive language while JavaScript is a procedural one. The anomaly appeared when we tried to 'describe' our HTML fragment using a JavaScript 'procedure', i.e. when we tried to override the descriptive nature of HTML using the procedural nature of JavaScript.

Now let's be a bit positive. The solution exists in a relatively new open source JavaScript library called JavaScript Templates (or JST), developed by TrimPath. A JavaScript Template is an HTML template that is yet to be evaluated using some JavaScript variables. This principle is very close to that of all server-side templates like jsp, erb, etc.

Now, the new steps are: i) write the JST that DESCRIBES your data representation; ii) write the JavaScript function that EVALUATES that template given some JavaScript variables as data parameters. Let's apply this to the example.

Performing step (i), our JST will look like this:

<textarea id='jst_user_details' style='display: none;'>
  <img src='${theUser.imageSource}' title='${theUser.name}'
       alt='${theUser.name}' />
  <label class='user_label'> Name: </label>
  <span class='user_name'> ${theUser.name} </span>
  <label class='user_label'> Telephone: </label>
  <span class='user_telephone'> ${theUser.telephone} </span>
</textarea>

We hold the template code inside a hidden textarea with a known id, just to be able to fetch the template afterwards at runtime. The evaluation sign '${}' means that what's inside the curly braces is still to be evaluated at runtime.

Now comes the magic in step (ii). Our long and ugly JavaScript function will be replaced with this one:

showUserDetails = function(user){
  userHTML = TrimPath.processDOMTemplate('jst_user_details',
                                         { theUser: user } );
  placeHolder = document.getElementById('place_holder');
  placeHolder.innerHTML = userHTML;
}

We simply called a function that processes the JST contained inside the textarea with id 'jst_user_details', given one data variable named 'theUser' with the same value as the parameter 'user'. And, tadaa, we have a pure HTML fragment that describes a certain user's details.

JST also supports the basic functionality of any server-side template language, like repetition and conditionals. A JST can typically contain the following snippet:

{for item in items}
  {if user.admin}
    <input type='button' value='edit' ...
  {else}
    <input type='button' value='view' ...
  {/if}
{/for}

To sum up, what JST does is eliminate the problem we introduced before. It simply separates HTML description from JavaScript procedures. Speaking for myself, it made me at least four times as productive as I used to be with traditional DOM manipulation. Besides, your code becomes neater, more modular, and more organized. Go ahead and give it a try: http://code.google.com/p/trimpath/wiki/JavaScriptTemplates


Thursday, May 1, 2008

Prototype JS incompatibility with Facebook JS client library

I have been working on a Facebook application using their all-new JavaScript client library. I was trying to reuse a large self-contained module that relied heavily on the Prototype & Scriptaculous JS libraries. The module was working just fine outside the context of the Facebook application. But after including the Facebook JS library, runtime JS errors began to appear, especially when initiating a Scriptaculous effect. Obviously, this was a compatibility issue between the Prototype & Scriptaculous family on one side, and the Facebook JS client library on the other.

The origin of the incompatibility, courtesy of Mohammad Ali, was discovered to be a tiny bug in Prototype's browser identification. To the point, the problem is as follows:
  1. Prototype identifies the browser as IE according to the presence of the IE-specific function 'attachEvent'. In prototype.js line 13:
    IE: !!(window.attachEvent && !window.opera),
  2. Obviously, this condition is fragile, and will fail if any other included script defines a "window.attachEvent" function for any other browser, which is exactly the case with the Facebook JS client library.
  3. The Facebook library contains the following lines (FacebookApi.debug.js line 387):
    if (window.navigator.userAgent.indexOf('Safari') >= 0) {
      _loadSafariCompat(window);
    }
    else {
      _loadMozillaCompat(window);
    }
    This code is embedded in an if-condition that is only executed if the browser is not IE.
  4. The function "_loadMozillaCompat" defines "window.attachEvent" for the Mozilla browser type, which causes Prototype's IE identification condition to fail, leading to a total mess in many cases.

    The solution is to strengthen Prototype's IE identification condition by adding a check that the browser is not Mozilla. The new condition looks like this (and it works :) ):
    IE: !!(window.attachEvent && !window.opera && navigator.userAgent.indexOf('Gecko') == -1),
This will do it for this specific incompatibility instance. However, the sure thing is that the foundation of Prototype's browser identification needs a basic tweak to be robust against other libraries.