Thursday, May 26, 2016

HTTP Request Testing: A Developer's Survival Tool - Kapil Sharma

What To Do When A Testing Suite Isn’t Feasible

There are times that we – programmers and/or our clients – have limited resources with which to write both the expected deliverable and the automated tests for that deliverable. When the application is small enough, you can cut corners and skip tests because you remember (mostly) what happens elsewhere in the code when you add a feature, fix a bug, or refactor. That said, we won’t always work with small applications; plus, they tend to get bigger and more complex over time. This makes manual testing difficult and super annoying.
For my last few projects, I was forced to work without automated testing and honestly, it was embarrassing to have the client email me after a code push to say that the application was breaking in places where I hadn’t even touched the code.
So, in cases where my client either had no budget for, or no intention of adding, an automated test framework, I started testing the whole website’s basic functionality by sending an HTTP request to each individual page, parsing the response headers and looking for the ‘200’ response. It sounds plain and simple, but there is a lot you can do to ensure fidelity without actually having to write any tests (unit, functional, or integration).
While there are merits to all three types of testing, tests don’t get written in a lot of contract projects for a variety of reasons. So what can you do?

Automated Testing

In web development, automated tests comprise three major test types: unit tests, functional tests and integration tests. We often combine unit tests with functional and integration tests to make sure everything runs smoothly as a whole application. When these tests are run in unison, or sequentially (preferably with a single command or click), we call them automated tests.
Largely, the purpose of these tests (at least in web development) is to make sure all application pages render without trouble, free from fatal (application-halting) errors or bugs.

Unit Testing

Unit testing is a software development process in which the smallest parts of code – units – are independently tested for correct operation. Here’s an example in Ruby:
test "should return active users" do
  active_user = create(:user, active: true)
  non_active_user = create(:user, active: false)
  result = User.active

  assert_equal [active_user], result
end

Functional Testing

Functional testing is a technique used to check the features and functionality of the system or software, designed to cover all user interaction scenarios, including failure paths and boundary cases.
Note: all our examples are in Ruby.
test "should get index" do
    get :index
    assert_response :success
    assert_not_nil assigns(:object)
end

Integration Testing

Once the modules are unit tested, they are integrated one by one, sequentially, to check the combinational behavior, and to validate that the requirements are implemented correctly.
test "login and browse site" do
    # login via https
    https!
    get "/login"
    assert_response :success
 
    post_via_redirect "/login", username: users(:david).username, password: users(:david).password
    assert_equal '/welcome', path
    assert_equal 'Welcome david!', flash[:notice]
 
    https!(false)
    get "/articles/all"
    assert_response :success
    assert assigns(:articles)
end

Tests in an Ideal World

Testing is widely accepted in the industry and it makes sense; good tests let you:
  • Quality assure your whole application with the least human effort
  • Identify bugs more easily because you know exactly where your code is breaking from test failures
  • Create automatic documentation for your code
  • Avoid ‘coding constipation’, which, according to some dude on Stack Overflow, is a humorous way of saying, “when you don’t know what to write next, or you have a daunting task in front of you, start by writing small.”
I could go on and on about how awesome tests are, and how they changed the world and yada yada yada, but you get the point. Conceptually, tests are awesome.

Tests in the Real World

While there are merits to all three types of testing, they don’t get written in most projects. Why? Well, let me break it down:

Time/Deadlines

Everyone has deadlines, and writing fresh tests can get in the way of meeting one. It can take time and a half (or more) to write an application and its respective tests. Now, some of you may not agree with this, citing the time ultimately saved, but I don’t think this is the case and I’ll explain why in ‘Difference in Opinion’.

Client Issues

Often, the client doesn’t really understand what testing is, or why it has value for the application. Clients tend to be more concerned with rapid product delivery and therefore see programmatic testing as counterproductive.
Or, it may be as simple as the client not having the budget to pay for the extra time needed to implement these tests.

Lack of Knowledge

There is a sizeable tribe of developers in the real world that doesn’t know testing exists. At every conference, meetup, concert (even in my dreams), I meet developers who don’t know how to write tests, don’t know what to test, don’t know how to set up a testing framework, and so on. Testing isn’t exactly taught in schools, and it can be a hassle to set up and learn a framework just to get tests running. So yes, there’s a definite barrier to entry.

‘It’s a Lot of Work’

Writing tests can be overwhelming for both new and experienced programmers, even for those world-changer genius types, and to top it off, writing tests isn’t exciting. One may think, “Why should I engage in unexciting busywork when I could be implementing a major feature with results that will impress my client?” It’s a tough argument.
Last, but not least, it is hard to write tests, and computer-science students are rarely trained for it.
Oh, and refactoring with unit tests is no fun.

Difference in Opinion

In my opinion, unit testing makes sense for algorithmic logic but not so much for coordinating living code.
People claim that even though you’re investing extra time up front in writing tests, it saves you hours later when debugging or changing code. I beg to differ and offer one question: Is your code static, or ever changing?
For most of us, it’s ever changing. If you are writing successful software, you’re always adding features, changing existing ones, removing them, eating them, whatever, and to accommodate these changes, you must keep changing your tests, and changing your tests takes time.

But, You Need Some Kind Of Testing

No one will argue that lacking any sort of testing is the worst possible case. After making changes in your code, you need to confirm that it actually works. A lot of programmers try to manually test the basics: Is the page rendering in the browser? Is the form being submitted? Is the correct content being displayed? And so on. But in my opinion, this is barbaric, inefficient and labour-intensive.

What I Use Instead

The purpose of testing a web app, be it manually or automated, is to confirm that any given page is rendered in the user’s browser without any fatal errors, and that it shows its content correctly. One way (and in most cases, an easier way) to achieve this is by sending HTTP requests to the endpoints of the app and parsing the response. The response code tells you whether the page was delivered successfully. It’s easy to test for content by parsing the response body and searching for specific text string matches, or you can get one step fancier and use a web-scraping library such as nokogiri.
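For instance, a bare-bones version of such a check might look like this (a sketch using Ruby’s standard Net::HTTP and the nokogiri gem; the URL and the expected heading are placeholders):
require 'net/http'
require 'nokogiri'

# Request the page and make sure it was delivered successfully.
response = Net::HTTP.get_response(URI('http://localhost:3000/articles'))
puts "articles returned with #{response.code} http code." if response.code != '200'

# Parse the body and look for a specific piece of content.
doc = Nokogiri::HTML(response.body)
puts 'Articles heading not found in body' if doc.css('h1').none? { |h| h.text.include?('Articles') }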
If some endpoints require a user login, you can use a library designed for automating interactions (ideal for integration tests) such as mechanize to log in or click certain links. In the big picture of automated testing, this looks a lot like integration or functional testing (depending on how you use it), but it’s a lot quicker to write and can be added to an existing or new project with less effort than setting up a whole testing framework.
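Here’s what that might look like with mechanize (the URL, form fields and credentials are placeholders; check your app’s actual form):
require 'mechanize'

agent = Mechanize.new
page  = agent.get('http://localhost:3000/login')

# Fill in and submit the login form.
form = page.form_with(action: '/login')
form['username'] = 'david'
form['password'] = 'secret'
welcome = agent.submit(form)

puts "login failed with #{welcome.code} http code." if welcome.code != '200'

# The logged-in agent keeps its session, so it can now hit protected endpoints.
dashboard = agent.get('http://localhost:3000/dashboard')
puts "dashboard returned with #{dashboard.code} http code." if dashboard.code != '200'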
Edge cases present another problem when dealing with large databases with a wide range of values; testing whether our application is working smoothly across all anticipated datasets can be daunting.
One way to go about it is to anticipate all the edge cases (which is not merely difficult, it’s often impossible) and write a test for each one. This could easily become hundreds of lines of code (imagine the horror) and cumbersome to maintain. Yet, with HTTP requests and just one line of code, you can test such edge cases directly on the data from production, downloaded locally on your development machine or on a staging server.
Now of course, this testing technique is not a silver bullet and has its shortcomings, the same as any other method, but I find these types of tests faster and easier to write and modify.

In Practice: Testing with HTTP requests

Since we’ve already established that writing code without any kind of accompanying tests isn’t a good idea, my very basic go-to test for an entire application is to send HTTP requests to all its pages locally and parse the response headers for a 200 (or desired) code.
For example, if we were to write the above tests (the ones looking for specific content and a fatal error) with an HTTP request instead (in Ruby), it would be something like this:
# testing for fatal error
http_code = `curl -X #{route[:method]} -s -o /dev/null -w "%{http_code}" #{Rails.application.routes.url_helpers.articles_url(host: 'localhost', port: 3000)}`
if http_code !~ /200/
  return "articles_url returned with #{http_code} http code."
end


# testing for content
active_user = create(:user, name: "user1", active: true)
non_active_user = create(:user, name: "user2", active: false)
content = `curl #{Rails.application.routes.url_helpers.active_user_url(host: 'localhost', port: 3000)}`
if content !~ /#{active_user.name}/
  return "Content mismatch: active user #{active_user.name} not found in text body" # You can customise the message to your liking
end
if content =~ /#{non_active_user.name}/
  return "Content mismatch: non-active user #{non_active_user.name} found in text body" # You can customise the message to your liking
end
The first curl line covers a lot of test cases; any method raising an error on the articles page will be caught here, so it effectively covers hundreds of lines of code in one test.
The second part, which catches the content error specifically, can be used multiple times to check the content on a page. (More complex requests can be handled using mechanize, but that’s beyond the scope of this blog.)
Now, in cases where you want to test if a specific page works on a large, varied set of database values (for example, your article page template is working for all the articles in the production database), you could do:
ids = Article.all.select { |post|
  `curl -s -o /dev/null -w "%{http_code}" #{Rails.application.routes.url_helpers.article_url(post, host: 'localhost', port: 3000)}`.to_i != 200
}.map(&:id)
return ids
This will return an array of IDs of all the articles in the database that were not rendered, so now you can manually go to the specific article page and check out the problem.
Now, I understand that this way of testing might not work in certain cases, such as testing a standalone script or sending an email, and it is undeniably slower than unit tests because we are making direct calls to an endpoint for each test, but when you can’t have unit tests, or functional tests, or both, this is better than nothing.
How would you go about structuring these tests? With small, non-complex projects, you can write all your tests in one file and run that file each time before you commit your changes, but most projects will require a suite of tests.
I usually write two to three tests per endpoint, depending on what I’m testing. You can also try testing individual pieces of content (similar to unit testing), but I think that would be redundant and slow, since you would be making an HTTP call for every unit. On the other hand, such tests would be cleaner and easier to understand.
I recommend putting your tests in your regular test folder, with each major endpoint having its own file (in Rails, for example, each model/controller would have one file), and this file can be divided into three parts according to what we are testing. I often have at least three tests:

Test One

Check that the page returns without any fatal errors.
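Here’s a minimal sketch of what this first test can look like (the ROUTES list and helper names are illustrative; the full version lives in the GitHub project linked below):
# Assumes the Rails environment is loaded and a server is running on localhost:3000.
ROUTES = [
  { url: 'posts_url',    params: [], method: 'GET' },
  { url: 'new_post_url', params: [], method: 'GET' }
]

failed = ROUTES.select do |route|
  url = Rails.application.routes.url_helpers.send(route[:url], host: 'localhost', port: 3000)
  route[:http_code] = `curl -X #{route[:method]} -s -o /dev/null -w "%{http_code}" #{url}`
  route[:http_code] != '200'
end

puts "List of failed url(s) -- #{failed}"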
Note how I made a list of all the endpoints for Post and iterated over it to check that each page renders without any error. Assuming everything went well, and all the pages rendered, you will see something like this in the terminal:

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of failed url(s) -- []

If any page is not rendered, you will see something like this (in this example, the posts/index page has an error and hence is not rendered):

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of failed url(s) -- [{:url=>"posts_url", :params=>[], :method=>"GET", :http_code=>"500"}]

Test Two

Confirm that all the expected content is there:
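A sketch of this second test might look like the following (post_url, the field names and the active? predicate are illustrative; use whatever your model exposes):
# Fetch Post#show for a post and collect any expected content missing from the body.
post = Post.first
url  = Rails.application.routes.url_helpers.post_url(post, host: 'localhost', port: 3000)
body = `curl -s #{url}`

expected  = [post.title, post.description, post.active? ? 'Active' : 'Disabled']
not_found = expected.reject { |text| body.include?(text) }

puts "List of content(s) not found on Post#show page with post id: #{post.id} -- #{not_found}"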
If all the content we expect is found on the page, the result looks like this (in this example we make sure posts/:id has a post title, description and a status):

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of content(s) not found on Post#show page with post id: 1 -- []

If any expected content is not found on the page (here we expect the page to show the status of the post: ‘Active’ if the post is active, ‘Disabled’ if it is disabled), the result looks like this:

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of content(s) not found on Post#show page with post id: 1 -- ["Active"]

Test Three

Check that the page renders across all datasets (if any):
If all the pages are rendered without any error, we will get an empty list:

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of post(s) with error in rendering -- []

If some of the records have a problem rendering (in this example, the pages with IDs 2 and 5 are giving an error), the result looks like this:

➜ sample_app git:(master) ✗ ruby test/http_request/post_test.rb
List of post(s) with error in rendering -- [2, 5]
If you want to fiddle around with the above demonstration code, here’s my github project.

So Which Is Better? It Depends…

HTTP Request testing might be your best bet if:
  • You’re working with a web app
  • You’re in a time crunch and want to write something fast
  • You’re working with a big, pre-existing project where tests were not written, but you still want some way to check the code
  • Your code involves simple requests and responses
  • You don’t want to spend a large portion of your time maintaining tests (I read somewhere that unit tests = maintenance hell, and I partially agree)
  • You want to test if an application works across all the values in an existing database
Traditional testing is ideal when:
  • You’re dealing with something other than a web application, such as scripts
  • You’re writing complex, algorithmic code
  • You have time and budget to dedicate to writing tests
  • The business requires bug-free operation or a low error rate (finance, a large user base)
Thanks for reading the article; you should now have a method for testing you can default to, one you can count on when you’re pressed for time.
This article was written by BHUSHAN LODHA, a Toptal Ruby developer.

Tuesday, May 17, 2016

Declarative Programming: Is It A Real Thing? - Kapil Sharma

Declarative programming is, currently, the dominant paradigm of an extensive and diverse set of domains such as databases, templating and configuration management.
In a nutshell, declarative programming consists of instructing a program on what needs to be done, instead of telling it how to do it. In practice, this approach entails providing a domain-specific language (DSL) for expressing what the user wants, and shielding them from the low-level constructs (loops, conditionals, assignments) that materialize the desired end state.
While this paradigm is a remarkable improvement over the imperative approach that it replaced, I contend that declarative programming has significant limitations, limitations that I explore in this article. Moreover, I propose a dual approach that captures the benefits of declarative programming while superseding its limitations.
CAVEAT: This article emerged as the result of a multi-year personal struggle with declarative tools. Many of the claims I present here are not thoroughly proven, and some are even presented at face value. A proper critique of declarative programming would take considerable time and effort, and I would have to go back and use many of these tools; my heart is not in such an undertaking. The purpose of this article is to share a few thoughts with you, pulling no punches, and showing what worked for me. If you’ve struggled with declarative programming tools, you might find respite and alternatives. And if you enjoy the paradigm and its tools, don’t take me too seriously.
If declarative programming works well for you, I’m in no position to tell you otherwise.
You can love or hate declarative programming, but you cannot afford to ignore it.

The Merits Of Declarative Programming

Before we explore the limits of declarative programming, it is necessary to understand its merits.
Arguably the most successful declarative programming tool is the relational database (RDB). It might even be the first declarative tool. In any case, RDBs exhibit the two properties that I consider archetypical of declarative programming:
  • A domain specific language (DSL): the universal interface for relational databases is a DSL named Structured Query Language, most commonly known as SQL.
  • The DSL hides the lower level layer from the user: ever since Edgar F. Codd’s original paper on RDBs, it is plain that the power of this model is to dissociate the desired queries from the underlying loops, indexes and access paths that implement them.
Before RDBs, most database systems were accessed through imperative code, which is heavily dependent on low-level details such as the order of records, indexes and the physical paths to the data itself. Because these elements change over time, code often stops working because of some underlying change in the structure of the data. The resulting code is hard to write, hard to debug, hard to read and hard to maintain. I’ll go out on a limb and say that most of this code was, in all likelihood, long and full of proverbial rats’ nests of conditionals, repetition and subtle, state-dependent bugs.
In the face of this, RDBs provided a tremendous productivity leap for systems developers. Now, instead of thousands of lines of imperative code, you had a clearly defined data scheme, plus hundreds (or even just tens) of queries. As a result, applications had only to deal with an abstract, meaningful and lasting representation of data, and interface it through a powerful, yet simple query language. The RDB probably raised the productivity of programmers, and companies that employed them, by an order of magnitude.
What are the commonly listed advantages of declarative programming?
Proponents of declarative programming are quick to point out the advantages. However, even they admit it comes with trade-offs.
  1. Readability/usability: a DSL is usually closer to a natural language (like English) than to pseudocode, hence more readable and also easier to learn by non-programmers.
  2. Succinctness: much of the boilerplate is abstracted by the DSL, leaving fewer lines to do the same work.
  3. Reuse: it is easier to create code that can be used for different purposes; something that’s notoriously hard when using imperative constructs.
  4. Idempotence: you can work with end states and let the program figure it out for you. For example, through an upsert operation, you can either insert a row if it is not there, or modify it if it is already there, instead of writing code to deal with both cases (see the example just after this list).
  5. Error recovery: it is easy to specify a construct that will stop at the first error instead of having to add error listeners for every possible error. (If you’ve ever written three nested callbacks in node.js, you know what I mean.)
  6. Referential transparency: although this advantage is commonly associated with functional programming, it is actually valid for any approach that minimizes manual handling of state and relies on side effects.
  7. Commutativity: the possibility of expressing an end state without having to specify the actual order in which it will be implemented.
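To ground item 4: an SQL upsert (shown here in PostgreSQL’s dialect; table and column names are made up) declares the end state and lets the database decide whether that means an insert or an update:
-- Insert the row if it doesn't exist, update it if it does.
INSERT INTO users (id, name)
VALUES (1, 'david')
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;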
While the above are all commonly cited advantages of declarative programming, I would like to condense them into two qualities, which will serve as guiding principles when I propose an alternative approach.
  1. A high-level layer tailored to a specific domain: declarative programming creates a high-level layer using the information of the domain to which it applies. It is clear that if we’re dealing with databases, we want a set of operations for dealing with data. Most of the seven advantages above stem from the creation of a high-level layer that is precisely tailored to a specific problem domain.
  2. Poka-yoke (fool-proofness): a domain-tailored high-level layer hides the imperative details of the implementation. This means that you commit far fewer errors because the low-level details of the system are simply not accessible. This limitation eliminates many classes of errors from your code.

Two Problems With Declarative Programming

In the following two sections, I will present the two main problems of declarative programming: separateness and lack of unfolding. Every critique needs its bogeyman, so I will use HTML templating systems as a concrete example of the shortcomings of declarative programming.

The Problem With DSLs: Separateness

Imagine that you need to write a web application with a non-trivial number of views. Hard coding these views into a set of HTML files is not an option because many components of these pages change.
The most straightforward solution, which is to generate HTML by concatenating strings, seems so horrible that you will quickly look for an alternative. The standard solution is to use a template system. Although there are different types of template systems, we will sidestep their differences for the purpose of this analysis. We can consider all of them to be similar in that the main mission of template systems is to provide an alternative to code that concatenates HTML strings using conditionals and loops, much like RDBs emerged as an alternative to code that looped through data records.
Let’s suppose we go with a standard templating system; you will encounter three sources of friction, which I will list in ascending order of importance. The first is that the template necessarily resides in a file separate from your code. Because the templating system uses a DSL, the syntax is different, so it cannot be in the same file. In simple projects, where file counts are low, the need to keep separate template files may double or treble the number of files.
I open an exception for Embedded Ruby templates (ERB), because those are integrated into Ruby source code. This is not the case for ERB-inspired tools written in other languages since those templates must also be stored as different files.
The second source of friction is that the DSL has its own syntax, one different from that of your programming language. Hence, modifying the DSL (let alone writing your own) is considerably harder. To go under the hood and change the tool, you need to learn about tokenizing and parsing, which is interesting and challenging, but hard. I happen to see this as a disadvantage.
How can one visualise a DSL? It’s not easy, but let’s just say a DSL is a clean, shiny layer on top of low-level constructs.
You may ask, “Why on earth would you want to modify your tool? If you are doing a standard project, a well-written standard tool should fit the bill.” Maybe yes, maybe no.
A DSL never has the full power of a programming language. If it did, it wouldn’t be a DSL anymore, but rather a full programming language.
But isn’t that the whole point of a DSL? To not have the full power of a programming language available, so that we can achieve abstraction and eliminate most sources of bugs? Maybe, yes. However, most DSLs start simple and then gradually incorporate a growing number of the facilities of a programming language until, in fact, they become one. Template systems are a perfect example. Let’s see the standard features of template systems and how they correlate to programming language facilities (a concrete example follows the list):
  • Replace text within a template: variable substitution.
  • Repetition of a template: loops.
  • Avoid printing a template if a condition is not met: conditionals.
  • Partials: subroutines.
  • Helpers: subroutines (the only difference with partials is that helpers can access the underlying programming language and let you out of the DSL straightjacket).
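To see that mapping in the flesh, here is what several of these features look like in one such template DSL (Handlebars, as one example; the variable names are made up):
{{! variable substitution }}
<h1>{{title}}</h1>

{{! repetition of a template: loops }}
{{#each articles}}
  <li>{{this.name}}</li>
{{/each}}

{{! conditional printing: conditionals }}
{{#if onSale}}
  <span class="sale">On sale!</span>
{{/if}}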
This argument, that a DSL is limited because it simultaneously covets and rejects the power of a programming language, is directly proportional to the extent that the features of the DSL are directly mappable to the features of a programming language. In the case of SQL, the argument is weak because most of the things SQL offers are nothing like what you find in a normal programming language. At the other end of the spectrum, we find template systems where virtually every feature is making the DSL converge towards BASIC.
Let’s now step back and contemplate these three quintessential sources of friction, summed up by the concept of separateness. Because it is separate, a DSL needs to live in a separate file; it is harder to modify (and even harder to write your own); and it often (but not always) needs you to add, one by one, the features you miss from a real programming language.
Separateness is an inherent problem of any DSL, no matter how well designed.
We now turn to a second problem of declarative tools, which is widespread but not inherent.

Another Problem: Lack Of Unfolding Leads To Complexity

If I had written this article a few months ago, this section would have been named Most Declarative Tools Are #@!$#@! Complex But I Don’t Know Why. In the process of writing this article I found a better way of putting it: Most Declarative Tools Are Way More Complex Than They Need To Be. I will spend the rest of this section explaining why.
To analyze the complexity of a tool, I propose a measure called the complexity gap. The complexity gap is the difference between solving a given problem with a tool versus solving it in the lower level (presumably, plain imperative code) that the tool intends to replace. When the former solution is more complex than the latter, we are in the presence of the complexity gap. By more complex, I mean more lines of code, code that’s harder to read, harder to modify and harder to maintain, but not necessarily all of these at the same time.
Please note that we’re not comparing the lower level solution against the best possible tool, but rather against no tool. This echoes the medical principle of “First, do no harm”.
Signs of a tool with a large complexity gap are:
  • Something that takes a few minutes to describe in rich detail in imperative terms will take hours to code using the tool, even when you know how to use the tool.
  • You feel you are constantly working around the tool rather than with the tool.
  • You are struggling to solve a straightforward problem that squarely belongs in the domain of the tool you are using, but the best Stack Overflow answer you find describes a workaround.
  • When this very straightforward problem could be solved by a certain feature (which does not exist in the tool) and you see a Github issue in the library that features a long discussion of said feature with +1s interspersed.
  • A chronic, itching longing to ditch the tool and do the whole thing yourself inside a for loop.
I might have fallen prey to emotion here, since template systems are not that complex. But their comparatively small complexity gap is not a merit of their design; rather, it is a result of the domain of applicability being quite simple (remember, we’re just generating HTML here). Whenever the same approach is used for a more complex domain (such as configuration management), the complexity gap may quickly turn your project into a quagmire.
That said, it is not necessarily unacceptable for a tool to be somewhat more complex than the lower level it intends to replace; if the tool yields code that is more readable, concise and correct, it can be worth it. But when the tool is several times more complex than the problem it replaces, that is flat-out unacceptable. Brian Kernighan famously stated that, “Controlling complexity is the essence of computer programming.” If a tool adds significant complexity to your project, why even use it?
The question is, why are some declarative tools so much more complex than they need be? I think it would be a mistake to blame it on poor design. Such a general explanation, a blanket ad-hominem attack on the authors of these tools, is not fair. There has to be a more accurate and enlightening explanation.
Origami time! A tool with a high-level interface to an abstract lower level has to unfold the higher level from the lower one.
My contention is that any tool that offers a high level interface to abstract a lower level must unfold this higher level from the lower one. The concept of unfolding comes from Christopher Alexander’s magnum opus, The Nature of Order - in particular Volume II. It is (hopelessly) beyond the scope of this article (not to mention my understanding) to summarize the implications of this monumental work for software design; I believe its impact will be huge in years to come. It is also beyond this article to provide a rigorous definition of unfolding processes. I will use here the concept in a heuristic way.
An unfolding process is one that, in a stepwise fashion, creates further structure without negating the existing one. At every step, each change (or differentiation, to use Alexander’s term) remains in harmony with any previous structure, when previous structure is, simply, a crystallized sequence of past changes.
Interestingly enough, Unix is a great example of the unfolding of a higher level from a lower one. In Unix, two complex features of the operating system, batch jobs and coroutines (pipes), are simply extensions of basic commands. Because of certain fundamental design decisions, such as making everything a stream of bytes, the shell being a userland program, and standard I/O files, Unix is able to provide these sophisticated features with minimal complexity.
To underline why these are excellent examples of unfolding, I would like to quote a few excerpts from a 1979 paper by Dennis Ritchie, one of the authors of Unix:
On batch jobs:
… the new process control scheme instantly rendered some very valuable features trivial to implement; for example detached processes (with &) and recursive use of the shell as a command. Most systems have to supply some sort of special batch job submission facility and a special command interpreter for files distinct from the one used interactively.
On coroutines:
The genius of the Unix pipeline is precisely that it is constructed from the very same commands used constantly in simplex fashion.
UNIX pioneers Dennis Ritchie and Ken Thompson created a powerful demonstration of unfolding in their OS. They also saved us from a dystopian all-Windows future.
This elegance and simplicity, I argue, comes from an unfolding process. Batch jobs and coroutines are unfolded from previous structures (commands run in a userland shell). I believe that because of the minimalist philosophy and limited resources of the team that created Unix, the system evolved stepwise, and as such, was able to incorporate advanced features without turning its back on to the basic ones because there weren’t enough resources to do otherwise.
In the absence of an unfolding process, the high level will be considerably more complex than necessary. In other words, the complexity of most declarative tools stems from the fact that their high level does not unfold from the low level they intend to replace.
This lack of unfoldance, if you forgive the neologism, is routinely justified by the necessity to shield the user from the lower level. This emphasis on poka-yoke (protecting the user from low level errors) comes at the expense of a large complexity gap that is self-defeating because the extra complexity will generate new classes of errors. To add insult to injury, these classes of errors have nothing to do with the problem domain but rather with the tool itself. We would not go too far if we describe these errors as iatrogenic.
Declarative templating tools, at least when applied to the task of generating HTML views, are an archetypical case of a high level that turns its back on the low level it intends to replace. How so? Because generating any non-trivial view requires logic, and templating systems, especially logic-less ones, banish logic through the main door and then smuggle some of it back through the cat door.
Note: An even weaker justification for a large complexity gap is when a tool is marketed as magic, or something that just works; the opaqueness of the low level is supposed to be an asset because a magic tool is always supposed to work without you understanding why or how. In my experience, the more magical a tool purports to be, the faster it transmutes my enthusiasm into frustration.
But what about the separation of concerns? Shouldn’t view and logic remain separate? The core mistake, here, is to put business logic and presentation logic in the same bag. Business logic certainly has no place in a template, but presentation logic exists nevertheless. Excluding logic from templates pushes presentation logic into the server where it is awkwardly accommodated. I owe the clear formulation of this point to Alexei Boronine, who makes an excellent case for it in this article.
My feeling is that roughly two thirds of the work of a template resides in its presentation logic, while the other third deals with generic issues such as concatenating strings, closing tags, escaping special characters, and so on. This is the two-faced low level nature of generating HTML views. Templating systems deal appropriately with the second half, but they don’t fare well with the first. Logic-less templates flat out turn their back on this problem, forcing you to solve it awkwardly. Other template systems suffer because they truly need to provide a non-trivial programming language so their users can actually write presentation logic.
To sum up, declarative templating tools suffer because:
  • If they were to unfold from their problem domain, they would have to provide ways to generate logical patterns;
  • A DSL that provides logic is not really a DSL, but a programming language. Note that other domains, like configuration management, also suffer from lack of “unfoldance.”
I would like to close the critique with an argument that is logically disconnected from the thread of this article, but deeply resonates with its emotional core: We have limited time to learn. Life is short, and on top of that, we need to work. In the face of our limitations, we need to spend our time learning things that will be useful and withstand time, even in the face of fast-changing technology. That is why I exhort you to use tools that don’t just provide a solution but actually shed a bright light on the domain of their own applicability. RDBs teach you about data, and Unix teaches you about OS concepts, but with unsatisfactory tools that don’t unfold, I’ve always felt I was learning the intricacies of a sub-optimal solution while remaining in the dark about the nature of the problem it intends to solve.
The heuristic I suggest you to consider is, value tools that illuminate their problem domain, instead of tools that obscure their problem domain behind purported features.

The Twin Approach

To overcome the two problems of declarative programming, which I have presented here, I propose a twin approach:
  • Use a data structure domain specific language (dsDSL), to overcome separateness.
  • Create a high level that unfolds from the lower level, to overcome the complexity gap.

dsDSL

A data structure DSL (dsDSL) is a DSL that is built with the data structures of a programming language. The core idea is to use the basic data structures you have available, such as strings, numbers, arrays, objects and functions, and combine them to create abstractions to deal with a specific domain.
We want to keep the power of declaring structures or actions (high level) without having to specify the patterns that implement these constructs (low level). We want to overcome the separateness between the DSL and our programming language so that we are free to use the full power of a programming language whenever we need it. This is not only possible but straightforward through dsDSLs.
If you asked me a year ago, I would have thought that the concept of a dsDSL was novel; then one day, I realized that JSON itself was a perfect example of this approach! A parsed JSON object consists of data structures that declaratively represent data entries in order to get the advantages of the DSL while also making it easy to parse and handle from within a programming language. (There might be other dsDSLs out there, but so far I haven’t come across any. If you know of one, I would really appreciate your mentioning it in the comments section.)
Like JSON, a dsDSL has the following attributes:
  1. It consists of a very small set of functions: JSON has two main functions, parse and stringify (shown in the example after this list).
  2. Its functions most commonly receive complex and recursive arguments: a parsed JSON is an array, or object, which usually contains further arrays and objects inside.
  3. The inputs to these functions conform to very specific forms: JSON has an explicit and strictly enforced validation schema to tell valid from invalid structures.
  4. Both the inputs and the outputs of these functions can be contained and generated by a programming language without a separate syntax.
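For example, in JavaScript, JSON’s two functions move between the DSL text and plain data structures, with no separate syntax involved:
// From DSL text to data structures...
var parsed = JSON.parse ('{"name": "david", "active": true}');
parsed.name; // 'david' -- plain data, handled with the language itself

// ...and from data structures back to DSL text.
var text = JSON.stringify ({name: 'david', active: true});
text; // '{"name":"david","active":true}'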
But dsDSLs go beyond JSON in many ways. Let’s create a dsDSL for generating HTML using Javascript. Later I will touch on the issue of whether this approach may be extended to other languages (spoiler: It can definitely be done in Ruby and Python, but probably not in C).
HTML is a markup language composed of tags delimited by angle brackets (< and >). These tags may have optional attributes and contents. Attributes are simply a list of key/value pairs, and contents may be either text or other tags. Both attributes and contents are optional for any given tag. I’m simplifying somewhat, but this is accurate.
A straightforward way to represent an HTML tag in a dsDSL is by using an array with up to three elements:
  • Tag: a string.
  • Attributes: an object (of the plain, key/value type) or undefined (if no attributes are necessary).
  • Contents: a string (text), an array (another tag) or undefined (if there are no contents).
For example, <a href="views">Index</a> can be written as ['a', {href: 'views'}, 'Index'].
If we want to embed this anchor element into a div with class links, we can write: ['div', {class: 'links'}, ['a', {href: 'views'}, 'Index']].
To list several html tags at the same level, we can wrap them in an array:
[
   ['h1', 'Hello!'],
   ['a', {href: 'views'}, 'Index']
]
The same principle may be applied to creating multiple tags within a tag:
['body', [
   ['h1', 'Hello!'],
   ['a', {href: 'views'}, 'Index']
]]
Of course, this dsDSL won’t get us far if we don’t generate HTML from it. We need a generate function which will take our dsDSL and yield a string with HTML. So if we run generate (['a', {href: 'views'}, 'Index']), we will get the string <a href="views">Index</a>.
The idea behind any DSL is to specify a few constructs with a specific structure which is then passed to a function. In this case, the structure that makes up the dsDSL is an array of one to three elements with a specific shape. If generate thoroughly validates its input (and it is both easy and important to thoroughly validate input, since these validation rules are the precise analog of a DSL’s syntax), it will tell you exactly where you went wrong with your input. After a while, you’ll start to recognize what distinguishes a valid structure in a dsDSL, and this structure will be highly suggestive of the underlying thing it generates.
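The article doesn’t walk through the implementation of generate, but a minimal sketch (skipping the escaping of special characters and the void-tag handling a full version needs) might look like this:
function generate (input) {
   // An empty array generates nothing (useful for conditionally omitted tags).
   if (Array.isArray (input) && input.length === 0) return '';

   // A list of tags at the same level: generate each one and concatenate.
   if (Array.isArray (input) && Array.isArray (input [0])) {
      return input.map (generate).join ('');
   }

   // Validation: a single tag must be an array whose first element is a string.
   if (! Array.isArray (input) || typeof input [0] !== 'string') {
      throw new Error ('Invalid dsDSL input: ' + JSON.stringify (input));
   }

   var tag           = input [0];
   var hasAttributes = typeof input [1] === 'object' && input [1] !== null && ! Array.isArray (input [1]);
   var attributes    = hasAttributes ? input [1] : undefined;
   var contents      = hasAttributes ? input [2] : input [1];

   var html = '<' + tag;
   if (attributes) {
      for (var key in attributes) {
         // undefined values let us conditionally omit attributes.
         if (attributes [key] !== undefined) html += ' ' + key + '="' + attributes [key] + '"';
      }
   }
   html += '>';
   if (contents !== undefined) html += Array.isArray (contents) ? generate (contents) : contents;
   return html + '</' + tag + '>';
}

// generate (['a', {href: 'views'}, 'Index']) returns '<a href="views">Index</a>'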
Now, what are the merits of a dsDSL in contraposition to a DSL?
  • A dsDSL is an integral part of your code. It leads to lower line counts, file counts, and an overall reduction of overhead.
  • dsDSLs are easy to parse (hence easier to implement and modify). Parsing is merely iterating through the elements of an array or object. Likewise, dsDSLs are comparatively easy to design because instead of creating a new syntax (that everybody will hate) you can stick with the syntax of your programming language (which everybody hates but at least they already know it).
  • A dsDSL has all the power of a programming language. This means that a dsDSL, when properly employed, has the advantage of both a high and a low level tool.
Now, the last claim is a strong one, so I’m going to spend the rest of this section supporting it. What do I mean by properly employed? To see this in action, let’s consider an example in which we want to construct a table to display the information from an array named DATA.
var DATA = [
   {id: 1, description: 'Product 1', price: 20,  onSale: true,  categories: ['a']},
   {id: 2, description: 'Product 2', price: 60,  onSale: false, categories: ['b']},
   {id: 3, description: 'Product 3', price: 120, onSale: false, categories: ['a', 'c']},
   {id: 4, description: 'Product 4', price: 45,  onSale: true,  categories: ['a', 'b']}
]
In a real application, DATA will be generated dynamically from a database query.
Moreover, we have a FILTER variable which, when initialized, will be an array with the categories we want to display.
We want our table to:
  • Display table headers.
  • For each product, show the fields: description, price and categories.
  • Don’t print the id field, but add it as an id attribute on each row’s tr element.
  • Place a class onSale if the product is on sale.
  • Sort the products by descending price.
  • Filter certain products by category. If FILTER is an empty array, we will display all products. Otherwise, we will only display the products where the category of the product is contained within FILTER.
We can create the presentation logic that matches this requirement in ~20 lines of code:
function drawTable (DATA, FILTER) {

   var printableFields = ['description', 'price', 'categories'];

   DATA.sort (function (a, b) {return b.price - a.price});

   return ['table', [
      ['tr', dale.do (printableFields, function (field) {
         return ['th', field];
      })],
      dale.do (DATA, function (product) {
         var matches = (! FILTER || FILTER.length === 0) || dale.stop (product.categories, true, function (category) {
            return FILTER.indexOf (category) !== -1;
         });

         return matches === false ? [] : ['tr', {
            id: product.id,
            class: product.onSale ? 'onsale' : undefined
         }, dale.do (printableFields, function (field) {
            return ['td', product [field]];
         })];
      })
   ]];
}
I concede this is not a straightforward example; however, it represents a fairly simple view of the four basic functions of persistent storage, also known as CRUD. Any non-trivial web application will have views that are more complex than this.
Let’s now see what this code is doing. First, it defines a function, drawTable, to contain the presentation logic of drawing the product table. This function receives DATA and FILTER as parameters, so it can be used for different data sets and filters. drawTable fulfills the double role of partial and helper.
   function drawTable (DATA, FILTER) {
The inner variable, printableFields, is the only place where you need to specify which fields are printable ones, avoiding repetition and inconsistencies in the face of changing requirements.
   var printableFields = ['description', 'price', 'categories'];
We then sort DATA according to the price of its products. Notice that different and more complex sort criteria would be straightforward to implement since we have the entire programming language at our disposal.
   DATA.sort (function (a, b) {return b.price - a.price});
Here we return an array literal, which contains 'table' as its first element and its contents as the second. This is the dsDSL representation of the <table> we want to create.
   return ['table', [
We now create a row with the table headers. To create its contents, we use dale.do, which is a function like Array.map, but one that also works for objects. We will iterate printableFields and generate table headers for each of them:
      ['tr', dale.do (printableFields, function (field) {
         return ['th', field];
      })],
Notice that we have just implemented iteration, the workhorse of HTML generation, and we didn’t need any DSL constructs; we only needed a function to iterate a data structure and return dsDSLs. A similar native, or user-implemented function, would have done the trick as well.
We now iterate through the products contained in DATA.
      dale.do (DATA, function (product) {
We check whether this product is left out by FILTER. If FILTER is empty, we will print the product. If FILTER is not empty, we will iterate through the categories of the product until we find one that is contained within FILTER. We do this using dale.stop.
         var matches = (! FILTER || FILTER.length === 0) || dale.stop (product.categories, true, function (category) {
            return FILTER.indexOf (category) !== -1;
         });
Notice the intricacy of the conditional; it is precisely tailored to our requirement and we have total freedom for expressing it because we are in a programming language rather than a DSL.
If matches is false, we return an empty array (so we don’t print this product). Otherwise, we return a <tr> with its proper id and class and we iterate through printableFields to, well, print the fields.

         return matches === false ? [] : ['tr', {
            id: product.id,
            class: product.onSale ? 'onsale' : undefined
         }, dale.do (printableFields, function (field) {
            return ['td', product [field]];
Of course we close everything that we opened. Isn’t syntax fun?
         })];
      })
   ]];
}
Now, how do we incorporate this table into a wider context? We write a function named drawAll that will invoke all the functions that generate the views. Apart from drawTable, we might also have drawHeader, drawFooter and other comparable functions, all of which will return dsDSLs.
var drawAll = function () {
   return generate ([
      drawHeader (),
      drawTable (DATA, FILTER),
      drawFooter ()
   ]);
}
If you don’t like how the above code looks, nothing I say will convince you. This is a dsDSL at its best. You might as well stop reading the article (and drop a mean comment too because you’ve earned the right to do so if you’ve made it this far!). But seriously, if the code above doesn’t strike you as elegant, nothing else in this article will.
For those who are still with me, I would like to go back to the main claim of this section, which is that a dsDSL has the advantages of both the high and the low level:
  • The advantage of the low level resides in writing code whenever we want, getting out of the straightjacket of the DSL.
  • The advantage of the high level resides in using literals that represent what we want to declare and letting the functions of the tool convert that into the desired end state (in this case, a string with HTML).
But how is this truly different from purely imperative code? I think ultimately the elegance of the dsDSL approach boils down to the fact that code written in this way mostly consists of expressions, instead of statements. More precisely, code that uses a dsDSL is almost entirely composed of:
  • Literals that map to lower level structures.
  • Function invocations or lambdas within those literal structures that return structures of the same kind.
Code that consists mostly of expressions, and which encapsulates most statements within functions, is extremely succinct because all patterns of repetition can be easily abstracted. You can write arbitrary code as long as that code returns a literal that conforms to a very specific, non-arbitrary form.
A further characteristic of dsDSLs (which we don’t have time to explore here) is the possibility of using types to increase the richness and succinctness of the literal structures. I will expound on this issue in a future article.
Might it be possible to create dsDSLs beyond Javascript, the One True Language? I think that it is, indeed, possible, as long as the language supports:
  • Literals for: arrays, objects (associative arrays), function invocations, and lambdas.
  • Runtime type detection
  • Polymorphism and dynamic return types
I think this means that dsDSLs are tenable in any modern dynamic language (i.e.: Ruby, Python, Perl, PHP), but probably not in C or Java.
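As a quick plausibility check, here is a hedged sketch of the same idea in Ruby, with arrays, hashes and blocks standing in for JavaScript’s arrays, objects and lambdas (again, without escaping or void-tag handling):
def generate(input)
  return '' if input.empty?
  # A list of tags at the same level.
  return input.map { |item| generate(item) }.join if input.first.is_a?(Array)

  tag, *rest = input
  attributes = rest.first.is_a?(Hash) ? rest.shift : {}
  contents   = rest.first

  # nil values let us conditionally omit attributes.
  attrs = attributes.reject { |_, v| v.nil? }.map { |k, v| %( #{k}="#{v}") }.join
  inner = contents.is_a?(Array) ? generate(contents) : contents.to_s
  "<#{tag}#{attrs}>#{inner}</#{tag}>"
end

generate(['a', { href: 'views' }, 'Index']) # => '<a href="views">Index</a>'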

Walk, Then Slide: How To Unfold The High From The Low

In this section I will attempt to show a way of unfolding a high level tool from its domain. In a nutshell, the approach consists of the following steps:
  1. Take two to four problems that are representative instances of a problem domain. These problems should be real. Unfolding the high level from the low one is a problem of induction, so you need real data to come up with representative solutions.
  2. Solve the problems with no tool in the most straightforward way possible.
  3. Stand back, take a good look at your solutions, and notice the common patterns among them.
  4. Find the patterns of representation (high level).
  5. Find the patterns of generation (low level).
  6. Solve the same problems with your high level layer and verify that the solutions are indeed correct.
  7. If you feel that you can easily represent all the problems with your patterns of representation, and the generation patterns for each of these instances produce correct implementations, you’re done. Otherwise, go back to the drawing board.
  8. If new problems appear, solve them with the tool and modify it accordingly.
  9. The tool should converge asymptotically to a finished state, no matter how many problems it solves. In other words, the complexity of the tool should remain constant, rather than growing with the amount of problems it solves.
Now, what the hell are patterns of representation and patterns of generation? I’m glad you asked. The patterns of representation are the patterns in which you should be able to express a problem that belongs to the domain that concerns your tool. It is an alphabet of structures that allows you to write any pattern you might wish to express within its domain of applicability. In a DSL, these would be the production rules. Let’s go back to our dsDSL for generating HTML.
The humble HTML tag is a good example of patterns of representation. Let’s take a closer look at these basic patterns.
The patterns of representation for HTML are the following:
  • A single tag: ['TAG']
  • A single tag with attributes: ['TAG', {attribute1: value1, attribute2: value2, ...}]
  • A single tag with contents: ['TAG', 'CONTENTS']
  • A single tag with both attributes and contents: ['TAG', {attribute1: value1, ...}, 'CONTENTS']
  • A single tag with another tag inside: ['TAG1', ['TAG2', ...]]
  • A group of tags (standalone or inside another tag): [['TAG1', ...], ['TAG2', ...]]
  • Depending on a condition, place a tag or no tag: condition ? ['TAG', ...] : []
  • Depending on a condition, place an attribute or no attribute: ['TAG', {class: condition ? 'someClass' : undefined}, ...]
These instances can be represented with the dsDSL notation we determined in the previous section. And this is all you need to represent any HTML you might need. More sophisticated patterns, such as conditional iteration through an object to generate a table, may be implemented with functions that return the patterns of representation above, and these patterns map directly to HTML tags.
If the patterns of representation are the structures you use to express what you want, the patterns of generation are the structures your tool will use to convert patterns of representation into the lower level structures. For HTML, these are the following:
  • Validate the input (this is actually a universal pattern of generation).
  • Open and close tags (but not the void tags, like <input>, which are self-closing).
  • Place attributes and contents, escaping special characters (but not the contents of the <style> and <script> tags).
Believe it or not, these are the patterns you need to create an unfolding dsDSL layer that generates HTML. Similar patterns can be found for generating CSS. In fact, lith does both, in ~250 lines of code.
One last question remains to be answered: What do I mean by walk, then slide? When we deal with a problem domain, we want to use a tool that delivers us from the nasty details of that domain. In other words, we want to sweep the low level under the rug, the faster the better. The walk, then slide approach proposes exactly the opposite: spend some time on the low level. Embrace its quirks, and understand which are essential and which can be avoided in the face of a set of real, varied, and useful problems.
After walking in the low level for some time and solving useful problems, you will have a sufficiently deep understanding of their domain. The patterns of representation and generation will then arise naturally; they are wholly derived from the nature of the problem they intend to solve. You can then write code that employs them. If they work, you will be able to slide through problems where you recently had to walk through them. Sliding means many things; it implies speed, precision and lack of friction. Maybe more importantly, this quality can be felt; when solving problems with this tool, do you feel like you’re walking through the problem, or do you feel that you’re sliding through it?
Maybe the most important thing about an unfolded tool is not the fact that it frees us from having to deal with the low level. Rather, by capturing the empiric patterns of repetition in the low level, a good high level tool allows us to understand fully the domain of applicability.
An unfolded tool will not just solve a problem - it will enlighten you about the problem’s structure.
So, don’t run away from a worthy problem. First walk around it, then slide through it.
This article was written by FEDERICO PEREIRO, a Toptal developer