Why JavaScript and PHP rule the web

JavaScript and PHP are excellent languages. It took some time for me to really understand their power, and I hope to clarify my point of view in this post.

I believe those languages shine and survive because they are permissive and because they adopt what I’ll call an Assumption Inversion Principle (AIP) — I’ve made up that name, you may suggest a better one in the comments. 🙂

What is AIP?

Raw JavaScript does not have any module definition structure: no statements such as “import” or “include”. The symbols available to a piece of JavaScript code are pure assumptions. Those assumptions must be satisfied by other pieces of code, not by the code itself. It is incredibly easy to change behaviours by loading new script files before the one to be changed, rewiring its assumptions. This “change by adding” relates directly to the “O” in the SOLID principles.

Old PHP code goes the same way: your piece of code assumes that some symbols are available, and you simply use them. It is not the responsibility of your piece of code to link those symbols; you normally end up putting all of your script loading in something like a bootstrap system.

Import statements are bad, because they hardwire the symbols into your piece of code. If you need to change something, you actually need to get into the file and change it. In my opinion, “change by changing” is more prone to errors than “change by adding”. It also does not scale as well.

JavaScript and PHP code can also inspect its environment and adapt. So it’s possible to keep adding files that keep changing stuff depending on what has already changed or what is currently available for the script. This is pure gold in software development.

Think about WordPress plugins: you don’t need to touch a single line of the core code, you just drop in a new file and you can get a whole new experience out of your web site. Some plugins add features, other plugins only change behaviours. So it’s either “add by adding” or “change by adding.” It’s a win-win situation: core code is rarely changed to add a feature, so you won’t get bugs from features you don’t care about. For the features you do care about, you get the plugins, drop them in and live with their bugs; if you happen to find a scary security bug, just disable the plugin.

I know a lot of people who criticize this way of developing and would prefer that WordPress be more object-oriented than it currently is. For what? Then we would need a plethora of new abstractions to achieve the same stuff. As code is a liability, the less code we need to achieve something, the better.

I love Haskell and its type system, it’s phenomenal. But it suffers from the same problem as languages like Java and C#: modules are hard-wired to their dependencies, and code can’t inspect its environment. I’m not saying that those languages are bad and should not be used. They surely have their own good use cases.

For a fast and distributed development effort as the web is, we simply can’t afford anything that is unforgiving or strict. The code needs to be open for changes from the outside (change by adding).

So, for the web, I don’t like new PHP code that uses modules, nor the use of ES6 modules in JavaScript. They may feel faster for developers starting out new products, but my bet is that in the long run they end up killing productivity and/or quality.

Think about it for the current project you are working on: how much of it could you change by only adding new files? What could you deliver by adding a new file: a whole new story, a feature, a bug fix, or nothing?

All this permissiveness, with all this code based on assumptions, can turn into a real mess. That’s the reason these languages are hated by some developers, who then try to fix them by creating new, stricter languages for the web. The end of the story is always the same: JavaScript/PHP triumphs over all. Why? Is it because of AIP?

JavaScript and PHP have some hidden properties that newcomers tend to ignore. Those hidden properties make them a perfect fit for the web. What do you think?

Am I just an old-school guy talking bullshit? Please, be kind. 🙂

Haskell is just some steps away from Java

Start with Java, then:

  1. Forbid using null;
  2. Use only immutable objects, add “final” modifier to everything;
  3. Swap methods for static functions with the original “this” as the first argument, e.g. “foo.bar()” turns into “bar(foo)”;
  4. Add a lot of features to the type system;
  5. Remove type annotations, i.e. “Foo bar(Foo self)” turns into “bar(self)”;
  6. Remove useless parens, i.e. “bar(foo)” turns into “bar foo”;
  7. Add call-by-need evaluation;
  8. Done, you have Haskell.

It’s not that hard, is it?

One, using null references is a recognized bad practice, see “Null References: The Billion Dollar Mistake.” Java 8 already provides the Optional type to stop using nulls.

Two, immutable objects are a win strategy, see posts by Hirondelle Systems, IBM, Yegor, and others.

Three, as you only have immutable objects, there is no reason to use methods instead of static functions, as long as you keep polymorphism (not quite the case for Java, but for the sake of this rant, pretend it has this feature).

Four, improve the type system. The type system of the Java language misses a lot of features. If you don’t feel it, just consider this an added bonus. When you start using the type system’s features in your favor, you end up with much better code.

Five, imagine the Java compiler could infer the types of the arguments, so you don’t need to write them everywhere. You still have the same statically typed language; you just don’t need to write the types. Shorter code means less liability to haunt you.
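As a sketch of what this step looks like on the Haskell side (the function name here is mine, purely illustrative): no type is written anywhere, yet the code is fully statically typed, because the compiler infers the most general type.

```haskell
-- No type annotation anywhere, yet this is statically typed:
double x = x + x

-- In GHCi, ":t double" shows the inferred type:
--   double :: Num a => a -> a
```

You can still write the signature if you want to, and many Haskellers do for documentation, but the compiler never requires it here.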

Six, why all those parens? Just drop them. Less code to write, hurray!

Seven, call-by-need makes a lot of things easier (and a lot of things harder), but I really think it is a winner when you talk about productivity. When coding, I find it a lot easier to express things in terms of values instead of steps (mathematicians have been doing this since long before computers). Expressing things in terms of values in a universe without call-by-need would result in a lot of useless computation, so call-by-need is a must.
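A minimal Haskell sketch of why call-by-need matters (the names are mine, just for illustration): we can define a value as a conceptually infinite computation and consume only the part we actually need.

```haskell
-- An infinite list of naturals; with call-by-need, nothing is
-- computed until (and unless) someone demands it.
naturals :: [Integer]
naturals = [0 ..]

-- Only the five demanded elements are ever produced.
firstEvens :: [Integer]
firstEvens = take 5 (filter even naturals)
-- firstEvens == [0, 2, 4, 6, 8]
```

Under strict evaluation, `filter even naturals` on its own would never terminate; laziness is what lets us describe the value this way.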

Eight, done! This is Haskell. No functors, monads, arrows, categories or lists needed.
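To make the recipe concrete, here is a sketch (types and names are mine, purely illustrative) of where an immutable Java class ends up after the steps above: a record plus a plain function that takes the old “this” as its last argument.

```haskell
-- The Java starting point would be something like:
--   public final class Point {
--       private final int x, y;
--       ...
--       Point translate(int dx, int dy) { return new Point(x + dx, y + dy); }
--   }
-- After the eight steps, the same thing in Haskell:
data Point = Point { px :: Int, py :: Int } deriving (Show, Eq)

translate :: Int -> Int -> Point -> Point
translate dx dy p = Point (px p + dx) (py p + dy)
```

Immutability comes for free: `translate` returns a new `Point` and the original is untouched, exactly like the `final`-everything Java version.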

Why this post? Well, I don’t know. It just occurred to me that if you really go into following good coding practices in Java (e.g. avoid null, use immutable objects), you will eventually feel familiar with functional code. Add some more things to the mix, and you end up with Haskell. I think people feel a bit scared at first contact with Haskell (and family) because of the academic and mathematical atmosphere it has, but in the end it is just a lot of good practices that you are kind of required to comply with.

Code infected with exceptions

I believe that code infected with exceptions is bad. Imagine the following harmless code:

public static Foo newFoo(int bar) {
  switch (bar) {
    case 1:
      return new FooOne();

    case 2:
      return new FooTwo();

    default:
      throw new IllegalArgumentException("invalid bar code: " + bar);
  }
}

You use it in the middle of your business code with ease. The code compiles and everybody is happy:

public static List<BazFoo> getAllBazFoo() {
  List<BazFoo> result = new ArrayList<>();
  List<Baz> allBaz = BazDAO.getAllBaz();
  for (Baz baz : allBaz) {
    int id = baz.getId();
    int bar = baz.getBar();
    Foo foo = Foo.newFoo(bar);
    BazFoo e = new BazFoo(id, foo);
    result.add(e);
  }
  return result;
}

The problem is that the call Foo.newFoo(bar) can throw an IllegalArgumentException, and when this happens you will have no clue which Baz was invalid.

To address this and make the message more useful, we have to remember to add a try/catch in the code:

public static List<BazFoo> getAllBazFoo() {
  List<BazFoo> result = new ArrayList<>();
  List<Baz> allBaz = BazDAO.getAllBaz();
  for (Baz baz : allBaz) {
    int id = baz.getId();
    int bar = baz.getBar();
    Foo foo;
    try {
      foo = Foo.newFoo(bar);
    } catch (IllegalArgumentException e) {
      throw new IllegalArgumentException("invalid baz: " + id, e);
    }
    BazFoo e = new BazFoo(id, foo);
    result.add(e);
  }
  return result;
}

Add multiple layers of abstraction to your application and you’re ready: spaghetti with an exceptional taste. You add an exception at some point, and you have to review the entire stack of abstractions to ensure the exception does no harm to anyone and carries a useful message for future maintenance.

Haskell solves much of the need for exceptions using returns that symbolize failures, e.g. Maybe and Either, and the use of Monad eliminates the need to check the results at each step.
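A hedged sketch of what the Foo example could look like with Either instead of exceptions (the types and the error message are adapted from the Java code above; this is my own illustration, not a fixed idiom): the possibility of failure is visible in the type, and `traverse` threads it through a whole list without any per-step checking.

```haskell
data Foo = FooOne | FooTwo deriving (Show, Eq)

-- The failure is part of the return type, not a hidden exception:
newFoo :: Int -> Either String Foo
newFoo 1 = Right FooOne
newFoo 2 = Right FooTwo
newFoo n = Left ("invalid bar code: " ++ show n)

-- traverse uses Either's Applicative/Monad instance to stop at the
-- first failure, with no explicit checks in sight:
allFoos :: [Int] -> Either String [Foo]
allFoos = traverse newFoo
```

The caller cannot forget the failure case: pattern matching on the Either forces it to be handled, and the invalid value travels inside the Left instead of being lost in a stack trace.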

However, there are exceptions in Haskell. Why? I cannot understand the reason for preferring exceptions over special result values. In which situations is it better to use exceptions? They look like a modern goto to me.

C, C++, Java, Haskell?

I’m excited and distressed with my OpenGL adventure. At the same time that learning OpenGL pleases me, it also irritates me: I’ve dedicated a lot of time to learning Haskell and got involved enough to really like the language and the community. But doing something in OpenGL was very hard and lonely. There is very little content about it, and most of what exists is already obsolete.

I put Haskell in the corner for some time and refreshed my C knowledge to learn modern OpenGL, with shaders and buffer objects. The reason was simple: I didn’t manage to do anything in Haskell that actually worked, and I had no clue whether the problem was in my code or in the libraries I was using.

Well, making a moving camera in a 3D world was difficult, even in C. But I found lots of good material on the internet to help me do it. Using all this help, I managed to make it work.

I’ve decided to get back to Haskell to apply this knowledge I just got.

Oh dear! Making everything fit together with the right types made me sweat. If I weren’t so stubborn, I wouldn’t have done it. I switched from OpenGL to OpenGLRaw because the OpenGL package has no binding for “glUniformMatrix*”; I also switched from GLFW to GLFW-b because the former depends on the OpenGL package.

I read a post at InfoQ asking if the C language is still suitable. It made me think about a lot of things, including abandoning Haskell and going back to pointers, manual memory management, hairy strings, …

Perhaps it’s because I’m new to this world, but I feel kind of unproductive while programming in Haskell. My impression is that, unless you’re doing something very well consolidated and used by lots of community members, a lot of time is spent chasing libraries that are usually just bindings to ones written in C. And when that’s not the case, just to get a feel for a library’s API I need to read papers, trying to remember all the magic about Functors and Monads (I think I’ll need some books to really understand Arrows!).

Understanding pointers and memory management in C was difficult too. But back then I was a newcomer to the world of programming; my knowledge was very limited, and it was largely based on punching the code until it did what I wanted. I can’t accept that anymore, I need to understand what I’m doing.

I remember I had good reasons to switch from C to Java. After a while using Java, I started to see the cons of that language too. My undergraduate work was dedicated to pointing out the reasons for this change and the promising future of languages like Haskell.

I believe that this negative feeling happens with every language that you spend some time learning. The propaganda and the first steps are wonderful. You imagine a new world where everyone is rich and can enjoy all the beauties of life. With time, you realize that it’s not like that: every decision the language has made means positive and negative points to us, programmers. I’ve experienced this with C, Python, Java and now with Haskell.

Java has chosen the O.O. way: even for a static function or a global variable, the class wrapper will be there. Haskell picked the pure functional way: even I/O operations are wrapped within a monad.

This is not bad at all. The use of monads to model sequential computation was a great discovery that led to a lot of other solutions. All the choices made bring with them some pros and some cons. (Just be cautious with extremism [e.g. Singleton].)

I’m distressed because I don’t know what to pick:

  • C looks flexible, but with complexities I don’t wish to have. C11 may have fixed some (e.g. ), but GCC has no support for it yet;
  • I feel productive in Java, but the inherent overhead and the need for a JVM make me angry;
  • C++ I don’t know well, but I think I’d rather go with C than with it. Java has shown me good reasons to say no to O.O. But even GCC is using C++, so maybe I’m missing something. The libraries for C++ look great (e.g. GLM, Boost);
  • Haskell is my favorite, but it’s starting to bother me a little; this feeling of being stuck, without delivering anything, makes me tense. Maybe I need to give it some more time to really kick in. I don’t know if this commitment will be worth it, or whether I could make better use of the time enhancing my skills in another language.

The performance difference between C and Haskell doesn’t bother me much, because I’m not going to do micro-optimizations so early.

The ultimate answer is something like: it depends on your project, choose the right language for the right project. Well, I have no project at all, but I want to get deeper into game development.

What should I do?

DuDuHoX with sound: OpenAL

Working on DuDuHoX is getting increasingly difficult. I’ve thought about giving up over some problems I had, either with code or with libraries.

Putting animations into the game was complicated. The engine I wrote has no intermediate state, i.e. the player can only be in a single square at any given time. There’s no data representing the player being half on one square and half on another.

My solution was to put the animation data in the interface code (OpenGL). After a key press that moves the player, the engine is already in the new state, but the interface blocks input until it finishes showing the movement animation.

Another big problem I had was when I decided to put sounds in the game. My goals are background music and positional sounds. I found the OpenAL library for that. It’s written in C, and there’s a Haskell package that provides the binding.

But everything is a surprise. Installing OpenAL on Windows is really complicated; I spent an entire afternoon trying to do it. When I finally did, I realized that the ALUT package was also needed. That one was easy to install, but then I started getting link problems during compilation. Good thing I found a GHC bug report describing exactly what I was going through: Ticket 1243. After patching, everything went fine.

I made a music track in OpenMPT, drew the player in GIMP and put it all into the game.

Despite the difficulties, I’m very happy with the result I’m getting by doing DuDuHoX in Haskell. I’ve recorded a gameplay with CamStudio to share with you:

OpenGL in Haskell

I’m working on a maze game to learn Haskell. Its name: DuDuHoX. The reason is explained on my Portuguese blog.

The first game interface is for the console. It uses the package System.Console.ANSI. It works pretty well on Linux and Mac OS; Windows, however, has a small buffer problem, forcing the player to hit enter after every move.

DuDuHoX, console version

I decided to create a new interface for the game this week, with graphics and without the buffer problem on Windows.

At first, I tried to use SDL, because I had used it to create a maze game in C. I had no success; I couldn’t even compile an example.

I thought about using OpenGL directly for graphics, but didn’t know what to use to manage user input. I’ve seen some complaints about OpenGL and GLUT in this regard. After some searching, I found GLFW, an open-source C library that does everything I need, and most importantly: there is a GLFW Haskell package that provides the binding.

After some hours learning OpenGL and GLFW in Haskell, this is the result:

DuDuHoX, graphical version

You can play with this new interface, but the console one has more features. So, for the next weeks, I’ll try to add those features and some nice textures to the OpenGL interface. The TODO list and the code are available at GitHub: https://github.com/thiago-negri/DuDuHoX.

Happy 2013!

Useful pure functional programming

I guess every programmer used to the imperative model gets stunned wondering how crazy the pure functional model is. After all, how could we do anything useful without side effects: printing to the terminal, opening a window, writing logs, updating variables? OMG.

I did get stunned by Haskell, and thanks to a happy insistence I realized that the problem was in my head, not in the pure functional model.


Living for a long time in the context of an imperative world got me used to thinking in a specific, sequential way. I always needed to tell the computer how to do every computational step, what state to hold, which variables to update, when to return some value, etc.

On the other hand, in the pure functional world, I’m forced to think in a way to transform data. Instead of thinking on each step, now I need to think how to transform some data into another, i.e. how to extract the thing I want from the thing I have.

The classical examples that show this difference involve list manipulation.

Example #1

To calculate the sum of a list in the imperative style, you need to control the current sum state, which needs to be updated at each element.

int sum(int[] list) {
  int result = 0;
  for (int i : list)
    result += i;
  return result;
}

In the functional world, you need to transform a list into a single number that represents the sum of all its elements: the sum of an empty list is always zero, and the sum of any other list is its first element plus the sum of the rest of the list.

sum [] = 0
sum (x:xs) = x + sum xs
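This recursion pattern is so common that Haskell captures it in a standard library function: the same sum is just a fold (the primed name is only to avoid clashing with the definition above).

```haskell
-- foldr replaces every (:) in the list with (+) and the final []
-- with 0, which is exactly the recursion written out above.
sum' :: [Int] -> Int
sum' = foldr (+) 0
```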

Example #2

When we need to maintain control over two lists in the imperative world, one for input and one for output, we need to include the current index of each list in the “bag of things we need to reason about in order to avoid silly errors”.

String[] toText(int[] list) {
    String[] result = new String[list.length];
    for (int i = 0; i < list.length; ++i)
        result[i] = Integer.toString(list[i]);
    return result;
}

The same example in the functional world stays as simple as the sum of the list; the only difference is that we transform the list using the list constructor instead of the plus operator.

toText [] = []
toText (x:xs) = show x : toText xs
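Again, the pattern has a name in the standard library: this is just map (primed name only to avoid clashing with the definition above).

```haskell
-- map applies a function to every element, producing a new list;
-- no indices to keep track of.
toText' :: [Int] -> [String]
toText' = map show
```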


The pure functional model can do many more things beyond these classical examples, in a way that I consider beautiful and with free benefits. This post is yet another attempt to demonstrate this. As I’m quite new to the pure functional world and have no profound knowledge of Monads, Functors, Arrows and all that stuff that looks brilliant in the hands of those who know how to use it, I guess this post will be easy to follow, as long as you have a basic understanding of Haskell.


We’ll create a calculator in Haskell. All the code shown will be pure, with no side effects, including a DSL to write tests, the test executor itself, and two test case transformations: one that renders the test as text and one that renders the same test as JUnit code.

Test cases

First, let’s define the test cases’ format. They are a sequence of actions and assertions:

data TestSequence =
    Do Action TestSequence
  | Check Assertion TestSequence
  | Done

With that data definition, the test cases can be written like this:

test =
  Do a .
  Do b .
  Do c .
  Check d $
  Done

The need to write “Done” at the end of the test case bothers me. To avoid that, we can create a type synonym to open-ended tests:

type Test = TestSequence -> TestSequence

Now the tests can be written as:

test =
  Do a .
  Do b .
  Do c .
  Check d

To me, this is a valid small DSL to write our test cases.

The only action available to our user will be to push the buttons and the only information that could be checked is the display:

data Action =
    Press Button

data Assertion =
    DisplayHasNumber Int

data Button =
    Zero
  | One
  | Two
  | Three
  | Four
  | Five
  | Six
  | Seven
  | Eight
  | Nine
  | Plus
  | Minus
  | Times
  | Divide
  | Equals
  | Clear
    deriving (Show)

First test

The first step is done. We already have a DSL to write tests that type-check. Time to write our first test:

sample :: Test
sample =
    Do (Press One).
    Do (Press Plus).
    Do (Press One).
    Do (Press Equals).
    Check (DisplayHasNumber 2).

    Do (Press Clear).
    Check (DisplayHasNumber 0).

    Do (Press Two).
    Do (Press Zero).
    Check (DisplayHasNumber 20).
    Do (Press Divide).
    Do (Press Two).
    Check (DisplayHasNumber 2).
    Do (Press Equals).
    Check (DisplayHasNumber 10)

Test transformation

Note that the test case is actually a data structure, giving us the possibility to transform it. To make the transformation easier, let’s create a function that applies some generic function to each step, creating a list with the results:

unroll :: (TestSequence -> a) -> Test -> [a]
unroll f t = g (t Done)
  where g Done = [f Done]
        g v@(Do _ next) = f v : g next
        g v@(Check _ next) = f v : g next

Test case -> Text

We can transform the test case into text:

prettyPrint :: Test -> String
prettyPrint = unlines . unroll prettyPrintTestSequence

prettyPrintTestSequence :: TestSequence -> String
prettyPrintTestSequence s =
    case s of
      Done              -> "end"
      Do action _       -> prettyPrintAction action
      Check assertion _ -> prettyPrintAssertion assertion

prettyPrintAction :: Action -> String
prettyPrintAction (Press button) = 
    "press " ++ prettyPrintButton button

prettyPrintButton :: Button -> String
prettyPrintButton = map toLower . show

prettyPrintAssertion :: Assertion -> String
prettyPrintAssertion (DisplayHasNumber number) = 
    "the display should be showing the number " ++ 
    show number

Let’s try in GHCi:

> putStr $ prettyPrint sample
press one
press plus
press one
press equals
the display should be showing the number 2
press clear
the display should be showing the number 0
press two
press zero
the display should be showing the number 20
press divide
press two
the display should be showing the number 2
press equals
the display should be showing the number 10
end

Test case -> JUnit

We can translate our test case into JUnit:

generateJUnit :: Test -> String
generateJUnit = 
    ("@Test\npublic void test() {\n" ++) . 
    unlines .
    unroll generateJUnitTestSequence

generateJUnitTestSequence :: TestSequence -> String
generateJUnitTestSequence s =
    case s of
      Done      -> "}"
      Do a _    -> generateJUnitAction a
      Check a _ -> generateJUnitAssertion a

generateJUnitAction :: Action -> String
generateJUnitAction (Press b) =
    generateJUnitButton b ++ ".press();"

generateJUnitButton :: Button -> String
generateJUnitButton b = "getButton" ++ show b ++ "()"

generateJUnitAssertion :: Assertion -> String
generateJUnitAssertion (DisplayHasNumber n) =
    "assertEquals(" ++ show n ++ ", getDisplayNumber());"

At GHCi:

> putStr $ generateJUnit sample
@Test
public void test() {
getButtonOne().press();
getButtonPlus().press();
getButtonOne().press();
getButtonEquals().press();
assertEquals(2, getDisplayNumber());
getButtonClear().press();
assertEquals(0, getDisplayNumber());
getButtonTwo().press();
getButtonZero().press();
assertEquals(20, getDisplayNumber());
getButtonDivide().press();
getButtonTwo().press();
assertEquals(2, getDisplayNumber());
getButtonEquals().press();
assertEquals(10, getDisplayNumber());
}

Concrete test

And of course, we can use our test case data structure to actually test a calculator implementation:

data TestResult =
    Ok
  | Failed FailureMessage

type FailureMessage = String

instance Show TestResult where
    show Ok = "Test passed"
    show (Failed m) = "Test failed: " ++ m

checkTest :: Test -> TestResult
checkTest t =
    evalState
      (threadCheckState . unroll checkTestSequence $ t)
      mkCalculator

threadCheckState :: [State Calculator TestResult] ->
                    State Calculator TestResult 
threadCheckState = go 0
  where go _ [] = return Ok
        go n (x:xs) = x >>= (f n xs)
        f n xs Ok = go (n + 1) xs
        f n _ (Failed m) = 
          return . Failed $
            "Step " ++ show n ++ ". " ++ m

checkTestSequence :: TestSequence ->
                     State Calculator TestResult
checkTestSequence Done = return Ok
checkTestSequence (Do a _) = checkAction a
checkTestSequence (Check a _) = checkAssertion a

checkAction :: Action -> State Calculator TestResult
checkAction (Press b) = do
    modify $ pressButton b 
    return Ok

checkAssertion :: Assertion -> State Calculator TestResult
checkAssertion (DisplayHasNumber n) =
    get >>= \c ->
      if displayNumber c == n
        then return Ok
        else return . Failed $ 
          "Wrong number in display, should be " ++
          show n ++ " but was " ++ show (displayNumber c)

Calculator’s core

Finally, to type-check our tester, we need to define a first version of our calculator. Here is your opportunity to try your own. I’ve produced the following code to pass the test:

data Calculator = Calculator {
    displayNumber :: Int
  , operation :: Maybe (Int -> Int -> Int)
  , savedNumber :: Int
  }

pressButton :: Button -> Calculator -> Calculator
pressButton b =
  case b of
    Zero    -> appendNumber 0
    One     -> appendNumber 1
    Two     -> appendNumber 2
    Three   -> appendNumber 3
    Four    -> appendNumber 4
    Five    -> appendNumber 5
    Six     -> appendNumber 6
    Seven   -> appendNumber 7
    Eight   -> appendNumber 8
    Nine    -> appendNumber 9
    Plus    -> saveOperation (+)
    Minus   -> saveOperation (-)
    Times   -> saveOperation (*)
    Divide  -> saveOperation div
    Equals  -> performOperation
    Clear   -> clear

appendNumber :: Int -> Calculator -> Calculator
appendNumber i c = 
  c { 
    displayNumber = displayNumber c * 10 + i 
  }

saveOperation :: (Int -> Int -> Int) ->
                 Calculator -> Calculator
saveOperation f c = 
  c { 
    savedNumber = displayNumber c
  , displayNumber = 0
  , operation = Just f
  }

performOperation :: Calculator -> Calculator
performOperation c = 
  c { 
    savedNumber = newNumber
  , displayNumber = newNumber 
  }
  where newNumber = 
          case operation c of
            Nothing -> displayNumber c
            Just f  -> let a = savedNumber c
                           b = displayNumber c 
                       in f a b

clear :: Calculator -> Calculator
clear = const mkCalculator

mkCalculator :: Calculator
mkCalculator = 
  Calculator { 
    displayNumber = 0
  , operation = Nothing
  , savedNumber = 0
  }

Running the tests

Now we can run the tester against our implementation at GHCi, let’s do it:

> checkTest sample
Test passed

This means our calculator is good enough for our use case. To make sure the test case can capture unexpected behavior, let’s introduce a bug by changing the multiplier in the function “appendNumber” from “10” to “100” and run the test again:

> checkTest sample
Test failed: Step 9. Wrong number in display, should be 20 but was 200



We’ve built the basic code for a working calculator that is good enough for a simple use case. We can add new tests and keep refactoring our code, TDD style. All of this was done in a 100% pure way, with no dependency on IO, state, variables or logs.

Beyond the benefit of being able to do any kind of transformation on the test cases, we can also run as many transformations and as many tests as we want in parallel, because the tests and the calculator are thread-safe by the simple fact that they are pure.

We did this in only 251 lines of code. How many classes or lines are needed to do the same in your favorite language?

I hope this post can enlighten you to think outside the box of the imperative world. There are many advantages in learning a pure functional language, even if you use imperative languages at work.

The code is available at GitHub: https://gist.github.com/3354394.