Faster extension development cycle: install changes automatically

The usual extension development cycle is less than optimal: change something, create a new extension build, install it in the browser (got to love the warnings), restart the browser and finally test the change. I don’t like repeating this cycle all the time, so in past years I’ve been using a test environment in which most extension files are loaded directly from my source code checkout (thanks to a manipulated chrome.manifest file). With this test environment many changes could be tested by simply reopening the extension window; for others a browser restart was required.
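For illustration, such a manipulated chrome.manifest registers the extension’s chrome packages with file:// URLs pointing into the source checkout instead of into the installed XPI (the package name and paths below are made up):

```
content  myextension             file:///home/user/src/myextension/chrome/content/
locale   myextension en-US       file:///home/user/src/myextension/chrome/locale/en-US/
skin     myextension classic/1.0 file:///home/user/src/myextension/chrome/skin/classic/
```

With that registration, edits to the files in the checkout are picked up the next time the corresponding chrome:// URL is loaded, no rebuild or reinstall required.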

Unfortunately, this approach is less useful when working on restartless extensions — it requires me to restart the browser even though updating the extension without a restart should be sufficient. And it completely broke down when I started testing Adblock Plus on an Android phone: my development machine and the browser where I test the extension are physically separated now. Add to this that getting Firefox Mobile to install an extension from the development machine is a rather complicated affair, and the result is that most of my “development” suddenly has nothing to do with writing code.

Random thought on communities

Being in charge of a popular project has its highs and lows. On the one hand, creating something that is used by many people can be highly rewarding. You have a large community that supports you, and there are many people willing to do their part. But then there are times when an unpopular change needs to be made, and as your community grows, almost any change will make you unpopular with somebody. All of a sudden you get people yelling at you — lots of people suddenly need to tell you how stupid that change is and what you should have done instead. It’s highly demotivating and makes you want to avoid uncomfortable changes. But that’s a dead end leading to a dead project.

Recently I’ve regularly seen people bashing Mozilla this way (I am probably not without guilt myself); I can only imagine how hard it is to be a Firefox developer these days. The same thing has happened with the Adblock Plus project a number of times and is happening again right now. Sure, the new “acceptable ads” feature is highly controversial in a community that is dedicated to blocking ads. Still, after putting countless hours into this project (unpaid until relatively recently) it is very disappointing to see how many people out there don’t want to believe that I possess some basic intelligence and integrity. It’s a good thing that we founded Eyeo — otherwise I might just give up and let them do it instead.

Google Chrome and pre-installed web apps

Google recently launched a redesigned version of its Web Store where one can install extensions and web apps. One particular feature caught my attention: it marks the extensions that you already have with a check mark. How does the web page know which extensions you have installed?

It turns out the answer is simple. The Web Store is a pre-installed web app (actually, it is even hardcoded into the browser). Web apps in Chrome can have special privileges if they request them, same as extensions. A look at the Preferences file shows the privileges of the Web Store app: the management API and the webstorePrivate API. The former allows querying your installed extensions, which explains how the website learns about them. But it can do more: enable or disable extensions, and even uninstall them without any kind of visible notification.
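As a sketch of what the management API makes possible, the helper below enumerates installed extensions via the documented getAll() call. The function name listExtensions() is hypothetical, added here so the logic can be shown outside a browser; getAll(), setEnabled() and uninstall() are the real Chrome API names.

```javascript
// Hypothetical helper: collect the id, name and enabled state of every
// installed extension, which is all the Web Store needs for its check marks.
function listExtensions(management) {
  return new Promise(function (resolve) {
    // chrome.management.getAll() passes an array of ExtensionInfo
    // objects to its callback.
    management.getAll(function (items) {
      resolve(items.map(function (item) {
        return { id: item.id, name: item.name, enabled: item.enabled };
      }));
    });
  });
}

// Inside the Web Store app this would be called as:
//   listExtensions(chrome.management).then(function (list) { /* mark installed */ });
// The same API also offers setEnabled() and uninstall(); no user prompt required.
```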

EU MozCamp, theme development, add-on localization with adofex

I’m still at the EU MozCamp 2011, but Mitchell Baker has already given her closing speech and things are wrapping up. It has been an interesting weekend, with a bunch of add-on-related sessions among other things. One interesting conclusion I drew from the discussions: the rapid releases aren’t a real issue for extension developers and don’t create more work. OK, I had suspected as much already, but it was nice to have other add-on authors confirm it. In the discussion session with extension developers this topic didn’t even come up, as opposed to localization for example, which is a significant pain point. AMO’s automated compatibility checks for extensions are working nicely and mark most add-ons as compatible already during the Aurora phase. There are plans that go beyond that as well, and it sounds like extension compatibility will mostly become a non-issue for end users in a few Firefox releases (at least as long as binary XPCOM components aren’t used).

But the rapid releases are a huge problem for theme developers and there is no good solution. Theme developers have to track all user interface changes and add corresponding changes to their themes — every 6 weeks, no way around it. So I was wondering what happened to a proposal that was discussed a few years ago: have all the default styles “built-in”, the default theme would simply be empty. Other themes wouldn’t have to duplicate all the rules of the default theme, they would merely override the rules that they want to change. And changes to the user interface wouldn’t cause catastrophic failure: even if the theme fails to adapt, the default rules would still be there. In the best case scenario there wouldn’t be any issue at all, in the worst case the new user interface element would look somewhat out of place.

Binary XPCOM components are dead – js-ctypes is the way to go

Daniel Glazman is shocked to see how hard shipping binary XPCOM components with an extension has become. The fact is, we simply didn’t notice the hidden message of the blog posts announcing the drop of binary compatibility (meaning that your component needs to be recompiled for each new Firefox version, no matter how simple it is) and rapid releases: binary XPCOM components in extensions are deprecated. Theoretically, somebody could still keep using them, but it requires so much effort that nobody can be expected to do so. Unfortunately, I haven’t seen it stated like that anywhere, hence this blog post. There is still tons of documentation on binary XPCOM components on MDN, with no deprecation warnings. Even the XPCOM changes in Gecko 2.0 page lists all the important changes without drawing any conclusions.

In reality, as the author of an extension that relies on binary code you should start looking at js-ctypes. If the point of a binary component was simply to call some platform functions, js-ctypes can do that for you. If you require native code (e.g. for cryptography functions that would be too slow in JavaScript), you can move it into a regular native library and ship that library with your extension. If you do that, don’t forget to add <em:unpack>true</em:unpack> to your extension’s install manifest; the library can only be loaded if it is unpacked into a file on disk. Use AddonManager.getAddonByID() and then Addon.getResourceURI() to locate the library on disk and open it with js-ctypes.
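As a sketch of that approach (the extension ID, library name and exported function below are hypothetical; the module URLs and the ctypes calls are the real Mozilla APIs):

```javascript
Components.utils.import("resource://gre/modules/ctypes.jsm");
Components.utils.import("resource://gre/modules/AddonManager.jsm");

AddonManager.getAddonByID("myextension@example.com", function(addon)
{
  // getResourceURI() only points at an actual file on disk if the
  // extension was installed with <em:unpack>true</em:unpack>.
  let uri = addon.getResourceURI(ctypes.libraryName("mylib"));
  let file = uri.QueryInterface(Components.interfaces.nsIFileURL).file;

  let lib = ctypes.open(file.path);
  // Assume the library exports: int32_t add(int32_t a, int32_t b)
  let add = lib.declare("add", ctypes.default_abi,
                        ctypes.int32_t, ctypes.int32_t, ctypes.int32_t);
  let sum = add(2, 3);
  lib.close();
});
```

ctypes.libraryName("mylib") takes care of the platform-specific prefix and suffix (libmylib.so, mylib.dll and so on), so the same code works on all platforms.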

Running Linux in the browser

I haven’t seen it mentioned on Planet yet — could it be that nobody has heard about it? I’ve seen lots of cool browser demos lately, but this one really blows me away: jslinux by Fabrice Bellard. This is a real x86 emulator written in JavaScript and running Linux, not a fake Linux terminal. The emulated hardware is somewhat limited (e.g. no FPU), but that doesn’t make it any less impressive that the emulator fits into less than 20 kB of JavaScript code. The emulator loads a bunch of binary Linux images and — voilà, Linux boots up.

It is also really fast: booting up takes 7 seconds in Firefox 4 for me, and all the other operations don’t take much longer than they would on a real system. Supposedly, Chrome 11 is also supported, but for me it hangs near the end of the boot process. Also, Chrome 11 is noticeably slower (Fabrice Bellard made this observation himself as well). There are plenty of command line tools available, including a compiler. At first I had my doubts, but they all work as you would expect. I can even ping 127.0.0.1 (pinging other addresses or using wget fails because the emulated hardware lacks a network interface). Update: even better: run telnetd, change the root password with passwd, then telnet to 127.0.0.1 and log in as root — it actually works.

Finding security issues in a website (or: How to get paid by Google)

I received a payment of $2,500 from Google today. Now the conspiracy theorists among you can go off and rant in all the forums that Adblock Plus is sponsored by Google and can no longer be trusted. For those of you who are still with me: the money came through Google’s Vulnerability Reward Program. Recently Google extended the scope of the program to web applications. I took up the challenge and sure enough, within a few hours I found four vulnerabilities in various corners of google.com.

Now to make this clear: Google has a very capable security team with great response times (yes, Yahoo!, I am looking at you). They have proper security review processes in place and generally the security of their web applications is pretty good. If you go after their popular applications like search or Gmail or YouTube you will pretty soon discover that you need to invest more time than the bug bounty justifies. However, if you look around on google.com you will notice that it is home to many more web applications, most of which are rarely looked at. And guess what: finding vulnerabilities in these moldy corners is a lot easier. It probably won’t stay this way but right now Google seems to be overpaying for the vulnerabilities found.

And that is the first lesson of web security: you cannot invest into securing one application but ignore the others. If you know that one application is less secure, at least move it to a different domain where it cannot be used to compromise other applications (at least as far as XSS goes). That still might turn out badly if a security vulnerability in that application allows an attacker to compromise the server.

It so happened that each of the four vulnerabilities I found is different, but each is typical in some way. I’ll describe them here as examples of what can go wrong in web development. Who knows, maybe it will help somebody avoid making the same mistake.

XULRunner in large projects, part 4: Localization pitfalls

I am back from the Mozilla Summit and have somewhat managed to process all the new information I got there. But instead of posting yet another summit summary or more summit photos (what, you didn’t know how great this summit was?) I have a far more boring topic for today: localization of XULRunner-based applications.

I mean, what is there to say about localization? It is really very simple. Some magic in the chrome:// protocol makes sure that whenever a file in the locale “subdirectory” is accessed, one of the available locales is selected and the file is loaded from there. This automatic selection mechanism works very well and selects the locale that is closest to the value of the general.useragent.locale preference.
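As an illustration, locale registration in chrome.manifest typically looks like this (the package name and file are made up):

```
locale myapp en-US chrome/locale/en-US/
locale myapp de    chrome/locale/de/
```

A request for chrome://myapp/locale/main.dtd is then transparently resolved against one of these directories, depending on which registered locale best matches general.useragent.locale.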