Wednesday, February 18, 2015

Starting my journey with Volt

Let me start with a disclaimer: I'm new to the Volt framework, so I'll probably get some things wrong. Don't rely on my opinions here :) The reason I'm blogging about it is to document my learning process. I also hope that you, dear readers, will catch my mistakes and correct them in the comments. Thanks to that, I'll learn more and can share better knowledge in future blog posts :)

I've played with Volt today. For those of you who don't know, Volt is a Ruby web framework. It's a bit unusual, as it's an isomorphic framework: the code is shared between the server and the client. It works in the browser thanks to Opal, a Ruby-to-JavaScript (source-to-source) compiler. In a way, it's similar to Meteor.

Opal

I'm quite excited about Opal and the way it can influence the Ruby/Rails community. One year ago, I recorded my suggestion/prediction that Opal could become the default JavaScript implementation in Rails apps. Here's the short (1-minute) video of me science-fictioning about it while walking in the forest near my home :)




I'm not following the Opal progress very closely, but from seeing what's possible in Volt, I feel quite confident about my last-year prediction.
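
If you want a quick taste of Opal itself, the compiler can be called from plain Ruby. Here's a minimal sketch, assuming the opal gem is installed (the exact API may differ between versions):

require 'opal'

# Compile a snippet of Ruby into JavaScript source
javascript = Opal.compile("[1, 2, 3].map { |n| n * 2 }")
puts javascript  # prints the generated JavaScript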

I'll focus on Opal in another blog post. For now, let me come back to Volt.

The wow effect

I came to Rails very early, in 2004. It was due to the "wow effect". Having experience with PHP, Java and ASP.NET, it was amazing to see a result produced in such a short amount of time, with such a short codebase!

Over time, I learnt a lot about JavaScript frontends. Here is a 12-blog-post summary of my lessons. I know what it takes to develop a nice Single Page Application (or whatever you prefer to call it).

This is the Volt wow effect. I'm old enough not to be excited too easily by new shiny toys, but Volt does impress me. In almost no time, you've got a rich frontend which works as you'd expect from a good JavaScript application - fast and interactive. You get the backend basically for free: it all autosyncs without any effort on your side. What's more, a new Volt app comes with authentication built in.

Part of the wow effect is the fact that you don't need to run any database migrations to add new fields. It all relies on Mongo under the hood, so it's schema-less.
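
To give a flavour of it - a sketch based on my memory of the Volt docs at the time, so treat the exact API as an assumption - persisting data is just pushing a hash into a synced collection, with no migration involved:

# In a Volt controller or console: `store` is the collection Volt persists
# to MongoDB and syncs to all connected clients.
store._todos << { name: 'Try Volt', completed: false }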

One of my roles is to be a Ruby/Rails teacher/coach, and I know how important a great "wow effect" is for the students.

The developer happiness

Another thing that brought me to Rails was its developer-oriented approach. The small things: the console, migrations (at that time they were huge), seeing everything immediately on the screen without restarting the server. It was all there, all prepared to make the developer very productive.

Volt is the same. There are lots of details that work very well - it's hard to mention them all. For me, it was great to see live reload in the browser out of the box. I change some Ruby code, and it immediately pushes the code to the browser and reloads the page. I know there are more tools like that, but seeing it work with Ruby in the toolchain is just great.

The documentation

I followed the tutorial (try it - you'll get a better picture of it in 15 minutes than from reading this post ;) ) and browsed through the documentation. All very precise and very simple. I didn't find any inconsistency with the actual behaviour.

The codebase

I only scanned some parts of the code, and it seems very well written. I was curious how the "reactive DOM" thing works here. It was easy to find the right piece of code and understand how the "watching" works. When I browse the Rails codebase, I often feel lost, as it depends on a lot of metaprogramming. I didn't feel the same here (yet).

BTW, yesterday I tried to run Opal with ReactJS but couldn't make it work. There's probably no reason why it wouldn't work, given more time. If you know anyone who has done that successfully, I'd love to see the result :)

Components

I'm not experienced enough to see all the details, but I love the focus on components from the beginning. Often, in Rails apps, we think that some pieces may be extracted as an engine/component/module/gem *one day* - and it rarely happens. Here, we're encouraged to do it from the start. Volt components can be packaged as gems, which is quite interesting.

The architecture of a Volt app

As much as I'm impressed by Volt being a great framework for scaffolding/prototyping (maybe even better than Rails one day?), I also have some doubts.
A lot of my work in recent years has been about decoupling legacy Rails applications. I wrote a book about refactoring Rails apps, and I keep collecting techniques which make it easier. Coupling is bad.

Is sharing the code between server and client a coupling? The answer to this question will be very important for me.

In a way, good decoupling may result in great reusability. There are applications where the server's role is simply to keep the data and to enforce basic authentication/authorisation rules. However, there are also apps where the server logic is very different from the client's.

Is reusability the goal in itself? I don't think so. It's more of a nice side effect of good separation of concerns.

What's the Volt model of code reuse? At the moment, I'm not really prepared to answer this question. It seems that, at any moment, you can easily switch off the coupling when required. That's a good sign. It's all Ruby under the hood, so we can do what we want.

Community

In the case of frameworks, it's not always the authors' vision that drives how the apps are created. There are many good things in the Rails framework. Neither Rails itself nor the Rails core team forces you to write monolithic applications - yet that's what many people do. The main Volt author (Ryan Stout) seems to be very modularity-oriented. From what I see in the Gitter activity, this vision is shared by the early adopters. That's a good sign.

The future?

Arkency (my company) has become expert in dealing with legacy Rails apps. We know how to decouple an app, step by step, in a safe manner. Are we going to become "legacy Volt apps experts" 5 years from now? Time will tell... I have positive feelings about it. It's great to see new players in the Ruby community. There's no way it will become more popular than Rails; however, competition is good. Volt is so different from Rails that the Merb case is not very likely here ;)

I'm not recommending starting serious apps with Volt (yet). However, the framework is mature enough to be tried. In only 15 minutes you can build the Todo app from the tutorial. At the very least, it will show you how the working process may look in the future and how different it is from your everyday flow.

As for me, I'm off to start another side-project with Volt now. Follow me on Twitter to see how it goes :)




Wednesday, February 11, 2015

From Rails to JavaScript frontends

I've collected a number of my blog posts, which together may help you in the transition from Rails to JavaScript (or CoffeeScript) frontends. Many of the posts are about the mental transition you need to make along the way, while other posts are more technical.

I think it's best to read them in the order I put them here, as they present the road I took over the last 5 years.

1. Rails is not MVC

2. Frontend is a separate application

3. From backend to frontend, a mental transition

4. Is CoffeeScript better than Ruby?

5. JavaScript frontends are like desktop applications

6. Frontend first

7. Non-framework approach to JavaScript applications

8. Angular or Ember? Congratulations, you've already won

9. Single Page Applications - fight

10. Turn a Rails controller into a Single Page Application

11. How can Rails react to the rise of JavaScript frameworks?

12. Which JavaScript framework should I choose? (video)

If you enjoyed reading the blog posts, you may consider following me on Twitter.



Which JavaScript framework should I use? (video)

Over the last 4-5 years, I have kept being asked this question: which JavaScript framework should I use?

2.5 years ago, I gave a talk at the RuPy conference where I tried to answer this question. I realised I had never posted the video to my blog. 2.5 years is like a century in the IT world - everything has changed, right? Yet I believe my answer is still correct and everything I said is still true.

Enjoy the 30-minute lecture :)


Tuesday, February 3, 2015

Splitting a Rails controller


Splitting a controller into separate files may be a good idea in certain cases.

Let me start with the cases when it's NOT a good idea. For a simple CRUD controller, this may be overkill. The same applies to simple, scaffold-like controllers.

Some controllers are more resource-oriented - those may benefit from keeping the actions together in one file. People often use filters here to reuse certain rules for accessing the resources.

Other controllers may be more action-oriented (as opposed to resource-oriented). It's typical for your-main-model controllers. In those controllers, you will see more than the standard actions, and the 'update', 'create' and 'destroy' actions will contain interesting logic. Those actions may each have different logic; often, they don't share that much.

The action-oriented controllers may benefit from splitting them into several files. I call them SingleActionControllers.

Please note that action-oriented controllers can contain a more complicated set of rules in the controller filters. Often they create a full algebra of filters, where you need to find your way through the :except clauses and the dependencies between filters. Refactoring those places requires a really precise way of working.

Apart from the filters problem, extracting a SingleActionController is easy and consists of the following steps (a sketch of the result follows the list):


1. Add a new route declaration above the previous one (the first matching route wins):
post 'products' => 'create_product#create' 
resources :products, except: [:create]

2. Create an empty controller, CreateProductController, which inherits from the previous one
3. Copy the action content to the new controller
4. Remove the action from the previous controller
5. Copy the filters/methods that are used by the action to the new controller
6. Make the new controller inherit from ApplicationController
7. Change the routes to add 'except: [:foo_action]'
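
Following those steps for a 'create' action, the result might look roughly like this (an illustrative sketch, not code from a real project):

class CreateProductController < ApplicationController
  before_action :authenticate_user!   # a filter copied over from the original controller

  def create
    product = Product.new(product_params)
    if product.save
      redirect_to product
    else
      render 'products/new'
    end
  end

  private

  def product_params
    params.require(:product).permit(:name, :price)
  end
end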

You can also see a simple example of this refactoring here:

http://rails-refactoring.com/recipes/extract-single-action-controller-class/



TDDing a unit, not a class

Some time ago, I blogged about "Unit tests vs Class tests", where I described how things differ when you think of a unit as a collection of classes rather than a single class.

How can we TDD such a unit?

This reddit thread triggered me to explain it a bit more. The title of the thread was interesting: "How does one TDD a user and session model?"

Such a question is a good example of how class tests seem to be the default in "unit testing" thinking. Instead of asking how to TDD the Authentication unit, we're asked how to TDD some single classes - which often depend on each other.

Here is my response:

One way to do it is to think one level higher. Don't focus on the User and Session classes, but focus on the whole feature/module that you're building. In this case, that's probably Authentication, right?

Think of Authentication as a class which is a facade. It's very likely that you will come up with User and Session classes sooner or later; they will be more like implementation details.

You probably need to implement functions like:
  • register
  • logging in
  • logging out
  • change_password
  • change_email
  • change_login
  • remember_me
  • delete_user

Together they can make nice stories. You can start with a simple test like:
def test_wrong_credentials_dont_authenticate 
  authentication = Authentication.new 
  authentication.register_user("user", "password") 
  assert_raises(WrongCredentials) { authentication.authenticate("user", "wrong password") } 
end 


(if you don't like exceptions, you can turn it into returning true/false instead)

One by one, you could build the whole feature this way, while being test-driven all the time.
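
For illustration, a minimal in-memory facade that would make this first test pass might look like this (just a sketch - a real implementation would hash passwords, persist users, and so on):

class WrongCredentials < StandardError; end

class Authentication
  def initialize
    @users = {}   # login => password, kept in memory for the sketch
  end

  def register_user(login, password)
    @users[login] = password
  end

  def authenticate(login, password)
    raise WrongCredentials unless @users[login] == password
    true
  end
end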

In a Rails app, you will then use this Authentication object from the Rails controllers. This is also a good way of improving Rails controllers.
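
For example, a sessions controller could delegate to the facade like this (a sketch; the names and details are illustrative):

class SessionsController < ApplicationController
  def create
    authentication.authenticate(params[:login], params[:password])
    session[:login] = params[:login]
    redirect_to root_path
  rescue WrongCredentials
    flash.now[:alert] = "Wrong login or password"
    render :new
  end

  private

  def authentication
    @authentication ||= Authentication.new
  end
end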

This approach is sometimes called top-down. You start with the expectation from the higher module (Authentication) and this drives you to implement the lower-level classes like User and Session.

Another approach is called bottom-up, where you start with the User and Session classes from the beginning and don't worry about the wrapping module yet.

The different approaches may result in different tests. With the top-down approach you end up with module tests, while with bottom-up you most likely end up with a test per class.

Sunday, January 4, 2015

Ruby and instance_variable_get - when is it useful?

While working on my book about refactoring Rails controllers, I collected a number of "unusual" Rails solutions from real-world projects. They were either sent to me by readers or I found them in some open-source Rails projects.

One of them was the idea of using instance_variable_get.

instance_variable_get in the Redmine project

I was working on one of the most important recipes from my book - "Explicitly render views with locals". It's important, as it opens the door to many other recipes - it makes them much simpler.

When I was practising this recipe, I tried to use it in many different projects. One of the open-source ones was Redmine. I applied this refactoring to one of the Redmine actions: I searched through the view to find all @ivars and replaced them with explicit object references (passed to the view through the locals collection). This time it didn't work - the tests (Redmine has really high test coverage) failed.

The reason was a helper named error_messages_for(*objects). It's used in many Redmine views.


The implementation looks like this:


The idea behind it is to display error messages for multiple @ivars in a view. As you can see, it uses instance_variable_get calls to retrieve the @ivar from the current context. That path is only used when a string is passed rather than an object directly.
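
To give you the general shape of it, such a helper might look roughly like this (my simplified sketch with illustrative details, not the actual Redmine code):

def error_messages_for(*objects)
  objects = objects.map do |object|
    # a string means "look up the @ivar with this name in the current context"
    object.is_a?(String) ? instance_variable_get("@#{object}") : object
  end
  messages = objects.compact.flat_map { |object| object.errors.full_messages }
  # ...render the collected messages (details omitted)
end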

This situation taught me 3 lessons:

1. I need to test my recipes in as many projects as possible to discover such edge cases.
2. High test coverage creates a very comfortable situation for experimenting with refactorings.
3. Ruby developers are limited only by their imagination. The language doesn't impose any constraints. It's both good and bad.

I've extended the recipe chapter with a warning about the instance_variable_get usage:


The lifecycle of a Rails controller

After some time, I was working on another chapter called "Rails controller - the lifecycle". In that chapter, I describe step by step what happens when a request is made, starting from the routes and ending with rendering the view. Each step is illustrated with a snippet of the Rails code.

Everybody knows that if you assign an @ivar in a controller, it's automatically available as an @ivar in the view object, even though they are different objects.

The trick here is that the Rails code uses instance_variable_get as well. It grabs all the instance variables and creates their equivalents in the view object using instance_variable_set. Here is part of the chapter:
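
In essence, the mechanism boils down to something like this (a simplified sketch of the idea, not the actual Rails source; `view` stands for the separate view object Rails builds):

controller.instance_variables.each do |name|
  value = controller.instance_variable_get(name)
  view.instance_variable_set(name, value)
end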


Easier debugging in rubygems code


In the rubygems code, you will find this piece of code. It's a way of displaying the current state (@ivars) of an object. At first, it may look useful, and I'm sure it served this purpose well. However, the root problem here is not how to display the state. The root problem is that the state is so big that it takes a bit of metaprogramming to retrieve it easily. Solving the root problem (splitting it into more classes?) would probably reduce the need for such tricks.
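
The general shape of such a trick is something like this (a sketch of the idea, not the actual rubygems code):

def inspect_state
  # dump every @ivar of the object as "name=value"
  instance_variables.map do |name|
    "#{name}=#{instance_variable_get(name).inspect}"
  end.join(", ")
end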

When is it useful?

instance_variable_get is a way of bypassing object encapsulation. There's a good reason that in Ruby we hide some state in @ivars. Such hidden state may or may not be exposed in some way. I'd compare instance_variable tricks to using the "send" method.

My rule of thumb is that it's sometimes useful when you build framework-like code - code that will be used by other people via some kind of DSL. Rails is such an example. I'm not a big fan of copying @ivars to the view, but I admit it plays some role in attracting new people to Rails.

However, it's rare for instance_variable_get to be a good idea in your typical application.

The temptation to use it is often about reducing the amount of code - a temptation similar to using "send"-based tricks. I'm guilty of that in several cases, and I learnt my lessons here. It might have felt "clever" to me, but it was painful for others to understand. It's rarely worth the confusion.

I've seen enough legacy Rails projects to know that sometimes the instance_variable_set trick is useful in tests. It's still "cheating", but it may be a good temporary step to get the project on the right path. Some modules/objects may be hard to test in isolation in a legacy context. Sometimes we may need to take an object (like a controller) and set some of its state via instance_variable_set at the beginning of the test. It's similar to stubbing. This way we may be able to build better test coverage. Test coverage on its own doesn't mean anything; however, good test coverage enables us to refactor the code so that it no longer needs metaprogramming tricks.
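
The idea looks more or less like this (an illustrative sketch, not code from a real project):

def test_renders_cached_report_header
  report = LegacyReport.new
  # "Inject" the hidden state directly instead of reproducing the full legacy setup:
  report.instance_variable_set(:@generated_at, Time.new(2015, 1, 1))
  assert_equal "2015-01-01", report.header_date
end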

When in doubt, don't use instance_variable_get.

Sunday, December 28, 2014

How to write a Ruby-related book - the tools

I have recently published the 1.0 release of the "Rails Refactoring: Rails Controllers" book. It took me over a year from when I started. Over that time, I've experimented with many tools which helped me along the way.

This post is a summary of some of the tools.

The short version

iaWriter for the initial writing
vim/sublime for editing
LeanPub for PDF/epub/mobi generation
Github for storing the source files (markdown)
DPD for selling
MailChimp for email list
Trello for task management

The longer version

Once I knew what to write about (the process of how I came up with the "idea" is another topic - let me know if you're interested in reading about it), I had a bunch of notes scattered across many tools: Evernote, Hackpad, MindNode, code repositories.

Despite all the temptations, I didn't try to find the best possible tool for writing. I know myself - that would result in endless experiments without any delivery. Obviously, there was a temptation to write such tools on my own. I'm very happy I didn't go that way.

The minimal requirement was to have a tool that generates a PDF.

Kitabu

The first tool that seemed to be good enough was kitabu. It was a good tool - think Rails, but for books: a whole framework.
They even give you the familiar "$ kitabu new mybook" command.



Kitabu was a very programmer-friendly approach.

I eventually gave up on Kitabu. The problem was the commercial PDF tool (PrinceXML) which it uses by default. I was OK with paying for good software, but at that time the license was about $2,000, which was a bit too high.

Scrivener

Then I switched to Scrivener. I loved this tool. I found it through some "real" writers' forum - it's very popular among writers. It's a desktop app, great for collecting notes, structuring and drafting. It's good at WYSIWYG, has built-in PDF generation and supports export to/import from markdown.



At some point, more people from my team started to help me with the book. The Scrivener approach wasn't very team-friendly (or I wasn't able to use it that way). There was no easy way to do a two-way sync with markdown, and other people would have been forced to use Scrivener as well. Also, it wasn't very version-control-friendly.

Leanpub

Meanwhile, Robert Pankowecki (with a little bit of my help) was working on the Developers Oriented Project Management book. He used Leanpub, and this tool was very team-friendly. There was a repo we could all contribute to, and it generates PDF/mobi/epub (on their servers) via a GitHub hook.


The GitHub repo:


(as you can see, there have been 300 commits so far and 13 people have contributed to the book!)

I was sceptical at first, as they (at least at that time) didn't give you the email addresses of the book's readers. I found that very limiting for marketing/sales activities. However, Leanpub lets you generate the PDF and take the files with you to sell somewhere else. That sounded perfect to us.
Over the last year, I've seen how Leanpub (the UI) keeps getting better. I may seriously consider switching to them as the selling tool as well at some point. They now also have an iOS app for reading your books.
So far, we're staying with Leanpub for our books (the third one is in progress).

My book is very code-heavy, and Leanpub is great for that. They not only support syntax highlighting, but also let us nicely present the diff between two pieces of code.



One minor issue with Leanpub is that you need to wait for the PDF generation (30-60 seconds). I usually work locally and make frequent commits to the local repo. Every now and then, I push to the GitHub repo. This triggers a hook which notifies Leanpub. After a while, I launch a simple bash script - download_and_preview.sh - which downloads the newest PDF and opens Preview on my Mac (if it's already open, it reloads the file). This is a good-enough flow.




iaWriter

Leanpub lets me write in Markdown. For some time, I used vim for that. After a while, I realised that there are two aspects of writing. When I write the initial version of a chapter, I need full focus to let the flow work. iaWriter is a perfect tool for distraction-free writing, with a beautiful full-screen mode. This is where I do most of my initial writing.



When the draft of a chapter is ready, I need to do more editing (it's like refactoring, but for prose). This is when I usually launch either vim or Sublime. The vim shortcuts let me edit faster.

When I'm done and ready for a new release of the book, I run the 'release' script:


It creates a capistrano-like releases directory and a new .zip file.
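
Roughly, it boils down to something like this (a simplified Ruby illustration; the paths and details are just examples, not the actual script):

require "fileutils"

timestamp   = Time.now.strftime("%Y%m%d%H%M%S")
release_dir = File.join("releases", timestamp)

# copy the generated book files into a timestamped release directory
FileUtils.mkdir_p(release_dir)
FileUtils.cp_r(Dir["manuscript/generated/*.{pdf,epub,mobi}"], release_dir)

# zip the release so it can be uploaded to the selling platform
system("zip", "-r", "#{release_dir}.zip", release_dir)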

DPD

I use the DPD system for selling. Unfortunately, they don't have an API for automatically uploading a new .zip file, so I have to change the product configuration manually. I hate this part, as it's very error-prone - I even attached the wrong file to the wrong product once.


As for payments, DPD only supports PayPal (at least for a Polish company, last time we checked), so this is what I went with. I'm not a big fan of PayPal, but it didn't seem to be a huge problem for all the buyers.

Whenever I have a new release, I can send an email from the DPD panel to all the customers, which automatically generates the appropriate download URL.

As you may expect, DPD gives you all the reporting, charts and data you may need.

Slack

Over time, we've created notifications around the whole process. Whenever a new commit is pushed to the repo, a notification appears on a special Slack channel. Whenever a new sale is made, a SalesPanda bot notifies us on the same channel.




Summary

As you can see, the whole process is a combination of multiple tools. In some places they are integrated automatically, while other places are still updated manually. The perfect setup would be Continuous Delivery, and we're very close to that here. When the deliverable is a PDF, the problem is that you would need to email people on every delivery, which doesn't sound perfect. With this book, I think I've had 4 releases so far, and this has worked well - from February 2014 until now.

Feel free to email me at andrzejkrzywda@gmail.com if you need any help with writing a book. It might be overwhelming at first.

The landing page (pure html/css) of the book mentioned in this post is here: http://rails-refactoring.com