Security considerations for password generators

When I started writing my very own password generation extension I didn’t know much about the security aspects. In theory, any hash function should do to derive the password, because hash functions cannot be reversed, right? Then I started reading and discovered that one is supposed to use PBKDF2. And not just that: one is supposed to use a large number of iterations. But why?

Introducing Easy Passwords: the new best way to juggle all those passwords

“The password system is broken” – I don’t know how often I’ve heard that phrase already. Yes, passwords suck. Nobody can be expected to remember passwords for dozens of websites. Websites enforcing arbitrary complexity rules (“between 5 and 7 characters, containing at least two upper-case letters and a dog’s name”) don’t make it any better. So far I’ve heard of three common strategies for dealing with passwords: write them down, use the same one everywhere, or just hit “forgot password” every time you access the website. None of these are particularly secure or recommendable, and IMHO neither are the suggestions to derive passwords via more or less complicated manual algorithms.

As none of the password-killing solutions have gained significant traction so far, password managers still seem to be the best choice for now. However, these often have the disadvantage of either relying on a third-party service that you have to trust, or storing your passwords on disk so that you have to trust their crypto. But there is also this ancient idea of deriving individual passwords from a single master password via one-way hashing functions. This is great because the only sensitive piece of data is your master password, and that one you can hopefully just remember.
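The idea can be sketched in a few lines. To be clear, this is not Easy Passwords’ actual algorithm, merely an illustration of the concept; the charset, parameters and helper name are my own assumptions:

```javascript
// Illustration of deriving per-site passwords from one master password.
// NOT Easy Passwords' actual algorithm; parameters are assumptions.
const crypto = require("crypto");

function derivePassword(masterPassword, domain, length = 16) {
  // The domain acts as the salt, so every site gets a different password.
  const bytes = crypto.pbkdf2Sync(masterPassword, domain, 100000, length, "sha256");
  const charset =
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
  let result = "";
  for (const b of bytes) {
    // Note: the modulo introduces a slight bias; a careful
    // implementation would avoid it.
    result += charset[b % charset.length];
  }
  return result;
}

console.log(derivePassword("my master password", "example.com"));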

Compiling C++ to JavaScript: Emscripten vs. Cheerp

Your JavaScript code is slow or needs too much memory? No problem, just rewrite it in C++ and compile it back to JavaScript — you will get much better performance, and the code will still run in any browser (or Node.js). Well, at least that’s what C++ to JavaScript compilers like Emscripten and Cheerp promise you. And often they can deliver, primarily thanks to heavy use of typed arrays, which allow modern JavaScript engines to optimize the resulting code much better than more traditional JavaScript. Also, the code is already preoptimized, with the C++ compiler recognizing calculations yielding constant results as well as inlining short functions.
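A toy example shows why typed arrays help. Emscripten and Cheerp represent the C++ heap as typed arrays; a `Float64Array` stores raw doubles in one contiguous buffer, so the engine knows the element type up front and needs no per-element type checks or boxing:

```javascript
// Toy illustration of typed-array storage, the representation that
// C++ to JavaScript compilers rely on for fast numeric code.
function sumTyped(n) {
  const data = new Float64Array(n); // contiguous, fixed-type storage
  for (let i = 0; i < n; i++) data[i] = i * 0.5;
  let sum = 0;
  for (let i = 0; i < n; i++) sum += data[i];
  return sum;
}

console.log(sumTyped(1000)); // → 249750
```

With a plain `Array`, the engine would have to account for the possibility of holes, mixed element types and prototype lookups; with `Float64Array` none of that applies.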

I tried both Emscripten and Cheerp, but the following isn’t exactly a fair comparison. For one, I spent much more time learning Emscripten than Cheerp, so I might have missed some Cheerp tweaks. Then again, I might have missed some Emscripten tweaks as well, as I am by no means an expert in it either. If you are still interested, enjoy the read!

Mozilla: What constitutes “open source”?

I became a Mozillian more than twelve years ago. I’m not sure whether the term “Mozillian” was even in use back then; I certainly never heard it. Also, I didn’t actually realize what had happened — to me Mozilla was simply a fascinating piece of software, one that allowed me to do a lot more than merely consume it passively. I implemented changes to scratch my own itch, yet these changes had an enormous impact at times. I got more and more involved in the project, and I could see it grow and evolve over time.

Not all of the changes were positive in my eyes, so this blog post hit a nerve with me: is Mozilla still an open source project? How is Mozilla different from Google or Microsoft who also produce open source software? See Android for example: while being technically open source, the project around it is completely dominated by Google. Want to contribute? Apply at Google!

In my opinion, making the source code accessible is a necessary requirement for an open source project, but clearly not the only one. It should be possible for people to get involved and make a difference. I could identify the following contributing factors:

Using WebExtensions APIs in a “classic” extension

So WebExtensions are the great new way to build Firefox extensions, and soon everybody creating a new extension should be using them over everything else. But what about all the people who already have extensions? How can one be expected to migrate a large extension to WebExtensions and still keep it working? Chances are that you will first spend tons of time rewriting your code, and then even more time responding to complaints from your users because that rewrite introduced bugs and unintended changes.

I don’t want to see myself in that hell; a gradual migration is almost always a better idea. So I looked into ways to use WebExtensions APIs from my existing, “classic” extension. And yes, it works. However, at this point the approach still makes many assumptions and uses internal APIs, so the code example below is merely a proof-of-concept and should be used with caution.

Missing a rationale for WebExtensions

Mozilla’s announcement that XUL/XPCOM-based add-ons will be deprecated raises many questions. Judging by the reactions, most people are very confused now. I mean, I see where this is coming from: XUL and XPCOM have become a burden. They come at a huge memory/performance/maintenance cost, impose significant limitations on browser development and create the danger that a badly written extension breaks everything. Whatever comes to replace them certainly won’t give add-on developers the same flexibility, however, especially when it comes to extending the user interface. This is sad, but I guess it has to be done.

What confuses me immensely, however, is WebExtensions, which are touted as the big new thing. My experience with the Chrome APIs isn’t all that positive: the possibilities are very limited, and there is a fair number of inconsistencies. The documentation isn’t great either; there are often undocumented details that you only hit when trying things out. This isn’t very surprising of course: the API has grown along with Chrome itself, and many of the newer concepts simply didn’t exist back when the first interfaces were defined. Even worse, Opera, despite using the same engine, occasionally implements the same APIs differently.

JavaScript Deobfuscator reloaded

A few weeks ago I released JavaScript Deobfuscator 2.0 — finally something that works with current Firefox versions again. Why did it take me a year to fix this compatibility issue? Well, it really wasn’t that simple. After considering all the options I decided that rewriting it from scratch was the only way forward, and that was hard to accomplish in my spare time.

Before I continue with the technical details, allow me to introduce JavaScript Deobfuscator in its new incarnation: it now adds a panel to the Firefox Developer Tools. Instead of making you mess with filters, it automatically limits the view to the current tab. Both compiled and executed scripts go into the same list, with some text indicating whether we’ve seen the script being compiled, executed or both. Starting with Firefox 39, even code running in Web Workers will be displayed. And JavaScript Deobfuscator will now beautify the code itself instead of relying on the JavaScript engine to do so.

Don’t forget to check the facts – because nobody else will

This experiment reminded me of another hoax I became aware of a while ago. A family member told me how food preservatives were supposedly becoming an issue for cemeteries because bodies would no longer decompose, not even after decades. Supposedly, specialists all over the world were noticing the problem but could do nothing about it as long as we stay on such an unhealthy diet. Where did they get this from? Well, they had read it in a respectable Russian newspaper.

This sounds reasonable at first. After all, food preservatives are meant to keep bacteria in check, probably the very same bacteria that are responsible for decomposing human bodies after burial. But wait: even preservatives in food won’t keep it fresh forever. And in order to have any effect, preservatives would have to be present throughout the human body in a concentration comparable to the one in food. That would be rather unexpected, because normally our digestion dilutes everything. Some substances do tend to accumulate in the body, but that accumulation only happens in certain body parts.