Do you trust your bank? I would like to think that the institution that manages all my money has the sense to hire software engineers who know how to properly secure information. After all, they are being trusted to protect millions if not billions of dollars. Sadly, this is not always the case. Luckily, there are telltale signs of an organization that doesn’t know what the hell it’s doing.
Maximum Password Length
Most web sites these days require some minimum password length, because longer passwords are generally harder to crack. What is troubling is a web site that imposes a maximum password length. Input is usually restricted in length so users can’t flood a database with data, but passwords are different. A properly secured password is run through a one-way hash function, which produces output of a fixed length no matter how long the input is. A maximum password length restriction is therefore a strong sign that no one-way hash function is being used and that your password is being stored insecurely.
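To see the fixed-length property in action, here is a small Python sketch. (SHA-256 is used purely to illustrate fixed-length output; real password storage should use a dedicated, slow password-hashing function such as bcrypt or scrypt.)

```python
import hashlib

# Two passwords of wildly different lengths...
short_password = b"hunter2"
long_password = b"x" * 10_000

# ...both hash to digests of identical, fixed length,
# so the database column never needs to grow with the password.
assert len(hashlib.sha256(short_password).hexdigest()) == 64
assert len(hashlib.sha256(long_password).hexdigest()) == 64
```

Since the stored digest is always the same size, there is no storage reason to cap how long a user's password can be.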
Emailing Your Password
Email is sent as plain text, which means anything in an email can be read by anyone watching the network. Because of this, it’s good practice to email users a password reset link when they forget their password. A reset link is single-use and short-lived, so an attacker who intercepts it learns far less than one who intercepts an actual password. If your bank emails you a new password when you forget your old one, there is a problem.
This Really Happens
In case you’re wondering what prompted me to talk about these two issues in particular, I recently noticed a real-life financial institution (which shall remain unnamed) that did both of these things. Needless to say, I won’t be trusting them with any of my money.
This is a story of how coding at thirty thousand feet helped me to write more testable code.
It started out simple enough. I decided I wanted to write a static site generator in Ruby. But it wasn’t going to be like all those other static generators focused on making blogs. My site generator would be a simple solution for building simple sites. That doesn’t really matter though. What does matter is that I hadn’t written any Ruby in years, and I was flying from Dallas to San Francisco on the day inspiration struck.
I knew how I wanted everything to work. I would read in a YAML file describing a site, then based on that file read in various other files containing Haml, Sass, CoffeeScript, etc. and spit out some ready to serve static assets. Normally this would turn into a quick and dirty hack with about a dozen tabs open in my browser as I looked up various libraries and Ruby documentation. But I was in a metal tube in the sky.
Now, I could have shelled out for some of that WiFi they put in planes these days, but I decided that would be too easy. I took it as a challenge. I would prove that I was capable of programming without the crutch that is the Internet. This idealistic dream quickly fell apart when I realized I didn’t have any of the libraries I needed. That didn’t matter though. I could lie.
I set to work writing my application with complete disregard for the infrastructure I lacked. Any time I needed something I didn’t have, I wrote a fake version. My editor soon filled with functions like read_config and render_haml. By the time the flight ended I had a perfectly crafted piece of logic that did absolutely nothing. But then I realized the unintentional brilliance of what I had done.
My core logic was a standalone unit with no dependencies. Every integration point had been perfectly mocked out of necessity. I had accidentally crafted the test driven ideal. All that was needed were some very minor alterations to make my fake functions easily injectable and everything was done. What started as a quick hack became a well designed testable application all because I didn’t have an Internet connection.
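The original project was written in Ruby, but the idea translates to any language. Here is a hypothetical Python sketch of the pattern: the core logic takes its collaborators as parameters, so fakes can stand in for libraries that aren't installed (only `read_config` and `render_haml` are named in the post; everything else here is my own illustration).

```python
def build_site(config_path, read_config, render_haml, write_file):
    """Core logic that depends only on the functions handed to it."""
    config = read_config(config_path)
    for page in config["pages"]:
        html = render_haml(page["source"])
        write_file(page["output"], html)


# Fakes make the logic runnable and testable with no real
# YAML or Haml libraries present.
def fake_read_config(path):
    return {"pages": [{"source": "index.haml", "output": "index.html"}]}


def fake_render_haml(source):
    return "<h1>rendered " + source + "</h1>"


written = {}


def fake_write_file(path, content):
    written[path] = content


build_site("site.yml", fake_read_config, fake_render_haml, fake_write_file)
```

Swapping the fakes for real implementations later requires no change to `build_site` itself, which is exactly what makes the logic testable.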
How do you maintain privacy on the Internet? If you’re anything like me you probably don’t like the idea of other people having access to things like your email and search history. Luckily there are options. I can’t feasibly go into detail about setting up all of these solutions, but I can at least show you what you have to work with.
The first step in maintaining online privacy is understanding your browser’s privacy controls. Most major browsers, such as Firefox and Chrome, have options to configure what data gets stored on your machine, as well as an option to tell web sites not to track you. Features like tab syncing are convenient, but remember that information is being stored somewhere.
Most search engines keep some sort of history about your searches. If it’s a search provider you are logged into, this history is probably linked to your account and not just the machine you’re using. If you want to make sure your search history is private, use a search engine such as DuckDuckGo that doesn’t store any information.
If you want to make sure no one can see what web sites you’re visiting, then you want Tor. When using Tor, your browsing activity is hidden by routing through a series of proxies. Tor is available as a browser plugin, but you can also download a pre-configured browser bundle if you want a simpler option.
Keeping instant messages private is as simple as using a client with OTR (Off-the-Record) support. Pidgin has a solid OTR plugin and Adium has support out of the box. Setting up an OTR conversation is relatively automated; you’ll just need to verify that the other user is who they say they are.
While instant messaging is typically secured through OTR, email tends to be secured with the more general purpose PGP (Pretty Good Privacy) encryption. PGP is a form of public-key cryptography that most mail clients support either natively or through plugins. Setting up PGP is more involved than OTR, but you can use it for encrypting anything.
All the technology that exists to help maintain your privacy is useless without some common sense on your part. Any data you send to a web service may be stored by that service and later exposed. It’s up to you to make sure any data you send over the wire is either encrypted or safe to share with the world.
I have years of experience with pencil and paint, but I’ve never really used ink before. Needless to say, my first week with a technical pen proved interesting. Ink has a precision and permanence I’ve never explored, and it makes for some stunning line drawings.
Before the ink though, an old piece finally scanned.
Now for the ink. As a warm up I did a simple sketch of an eye. It’s nothing special but it helped me adjust to the pen.
And finally, my first serious attempt: Radical Edward.
I’m not sure what compelled me to start drawing again, but after spending years without so much as picking up a pencil (a real pencil that is, not the #2 nonsense you use for writing) I opened up my sketchbook and went at it. As I expected, time had substantially degraded my abilities, but I was still able to form basic shapes. With that in mind I decided to attempt a simple line drawing.
The tools for the job were a Sanford 3B pencil for laying down the initial outline, a General’s 6B charcoal pencil for the final outline, and a Sanford Magic Rub eraser for cleaning it all up. I decided to use charcoal instead of a more sane medium like ink because I wanted to create a smooth gradient for the character’s iris.
Despite my lack of practice and odd choice of medium I’m happy with the result. Just try to ignore the part where I took a picture of it with my phone because I don’t own a scanner.
By September 2008 I was starting to get a little annoyed with Firefox. It had been my browser of choice for years, but the wonder of the open source world was beginning to disappoint me. I had been running the beta version of Firefox 3 since the first day it was available and the entire time all I could think about was how bloated and slow it was. I’ve always been a proponent of minimalism in design and Firefox was quickly becoming anything but minimal.
Now back to Firefox. It’s been five years since I’ve used Mozilla’s browser for anything more than a quick check to make sure a web page I’m building renders correctly. In that time I failed to notice the work that was being done. Firefox got fast. Side by side with Chrome I’m seeing pages render visibly faster. SpiderMonkey seems to have caught up to V8 as well. My own (non-scientific) testing showed V8 as still being slightly faster, but the difference was too small to be perceived by humans.
Then there are the developer tools. The WebKit developer tools still have a few features not present in Firefox, but for 95% of what I do the tools in Firefox are actually better. In the inspector, higher-contrast colors and vertical rhythm make it much easier for the eye to trace over the wall of text that is the unrendered DOM. On the page, selected elements are outlined with a subtle but easily visible dotted line instead of a page-obscuring blue box. There is also a select-on-hover mode that allows me to very quickly move through components on the page until I land on the one I want. The Firefox developer tools not only feel better, but also save me time selecting and manipulating elements on the page.
Technical issues aside, there’s something else that’s been in the back of my mind for a while now. I honestly believe Mozilla is committed to freedom and privacy on the web. Google is committed to making money and knowing everything I do. Firefox greets me with a page explaining my rights as a user of open source software. Chrome greets me with… sigh… Chrome greets me with a fucking advertisement for a Chromebook.
Right now I’m feeling a bit nostalgic. Firefox today reminds me of Firefox when I first discovered it. Mozilla has once again delivered a technically superior product while completely respecting my rights as a user. Firefox is freedom.
The Hacker News post about creating a one line browser notepad reminded me of a neat trick one of my friends showed me. Try loading this html into Chrome (or any browser that supports contenteditable="plaintext-only").
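The original snippet didn't survive in this copy of the post, but a minimal reconstruction of the trick might look like this (the specific CSS rules are my own illustration; the essential parts are displaying the style element as a block and marking it contenteditable):

```html
<!DOCTYPE html>
<html>
  <body>
    <style style="display: block; white-space: pre;" contenteditable="plaintext-only">
body { font-family: monospace; background: #eee; }
    </style>
  </body>
</html>
```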
You should immediately notice that the style element is now visible. Moreover, you can edit it and see the changes reflected on the page. Because contenteditable is set to plaintext-only, you can even add new lines to the style without pesky paragraph tags getting inserted and breaking everything.
Submitting to code reviews from my coworkers has forced me to come to terms with one of my shortcomings as a developer. I’m afraid of changing code. Whenever I’m forced to interact with a poorly written component, I often try to work around it rather than replace it. Unfortunately all of my reasons for doing this are complete bullshit.
“I’m just conforming to the design.”
By conforming to a poorly designed system I’m only increasing technical debt. If anyone ever does decide to replace the code, whatever I’m writing now is just one more thing that will need to be converted with it.
“This will all get rewritten eventually.”
Code won’t fix itself, and I can’t assume someone else is going to do it. If I don’t start changing things here and now, the code will never improve. Software development is iterative, so I must live in the moment.
“It will take too long to change.”
While implementation may take longer in the short term, the long-term time savings of cleaner code will far outweigh the time saved now. Less time spent writing tests, fixing bugs, and adding features adds up quickly.
“I don’t know enough to change it.”
Rewriting a component is the quickest way to figure out how it works, and taking the time to learn will increase my understanding of the system as a whole.
“I might break something.”
It doesn’t matter if I break something. That’s what version control is for. I can always revert if things go catastrophically wrong.
The future is now and I have no reason to suffer poorly designed code. No more conforming to anti-patterns. No more hiding behind abstraction layers. No more waiting on someone else to fix it. The only way software will improve is if bad code is mercilessly hunted down and destroyed. Kill it with fire.
From the land of the rising sun come two starkly contrasting films: one a classic teen horror flick set in a haunted Japanese mansion, the other an intense psychological thriller set in a modern cityscape. Spanning the entire spectrum of what might be considered horror, both are certainly worth watching.
My first attempt to find a good Japanese horror film yielded something more comedic than scary. This supernatural flick reads like classic horror on the surface, following a group of schoolgirls as they are killed off one by one while staying the night at a mysterious mansion. Unfortunately, over-the-top comedic relief and special effects that overreached the technology of the time distract from any dramatic tension. Despite all of this, the film’s originality still manages to shine through, providing an overall entertaining viewing experience.
At the other end of the spectrum is Audition: horror refined to its purest form. There are no monsters or jump scares to be found in this film. The viewer is saddled with a general sense of uneasiness that slowly builds until it finally explodes into raw, intense psychological terror. Audition is incredibly effective at tapping into a deep level of the human psyche and causing a great deal of discomfort. At times almost impossibly painful to watch, this film stands as one of my favorites.
In object oriented design, it’s common to code against some sort of interface that will be used across your application. For example, if you were making a video game (yes I have a one track mind) you might design your graphics, physics, and sound engines to all interact with an entity that conformed to an interface for describing its position. Perhaps that entity would look something like this:
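The post's code samples are not included here, so what follows is a minimal Python sketch. Only the position method returning an (x, y) tuple is given in the text; the constructor and attribute names are my own assumptions.

```python
class Entity:
    """A game object that exposes its location through `position`."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def position(self):
        # The one interface every engine is supposed to code against.
        return (self.x, self.y)
```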
This is pretty straightforward. Each entity has a position method that returns a tuple with x and y coordinates. If you were building everything from the ground up, you could easily design it all to follow this convention. In reality, you’re probably going to be using several third party libraries that will need to be coerced into working together.
Let’s say you lucked out with the graphics engine. It expects to see entities with a position method that returns a tuple of x and y. The physics engine on the other hand is looking for a method called pos. The sound engine is different still, looking for a location method. They all expect the same tuple structure to be returned, so you decide to just proxy pos and location to position.
Doing it Wrong: Direct References
This is simple, right? You’ll just update your Entity a little bit with some additional references to the position method.
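In Python terms (again a sketch, since the original code is missing), the "direct reference" approach aliases the position function at class definition time:

```python
class Entity:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def position(self):
        return (self.x, self.y)

    # Direct references: pos and location are bound to the exact
    # function object defined above, once, at class creation.
    pos = position
    location = position
```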
This seems to work great at first. All the components of your game are able to query your Entity object using whatever method they want, and get back the result they expect. So what’s the issue?
A little later you decide it would be good to have an entity class that calculates its location relative to some parent in order to make scene graphs. If you’re unfamiliar with game engines, this is how it’s usually done. This should be pretty simple with the magic of inheritance. Just make a subclass that implements its own position method.
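A self-contained sketch of the subclass, reproducing the breakage described below (class and attribute names beyond position/pos/location are assumptions):

```python
class Entity:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def position(self):
        return (self.x, self.y)

    # Direct references to the function object above.
    pos = position
    location = position


class RelativeEntity(Entity):
    """Positions itself relative to a parent, for scene graphs."""

    def __init__(self, x, y, parent):
        super().__init__(x, y)
        self.parent = parent

    def position(self):
        # Offset by the parent's position.
        px, py = self.parent.position()
        return (self.x + px, self.y + py)


child = RelativeEntity(1, 1, Entity(10, 10))
child.position()  # (11, 11) -- the override works for graphics
child.pos()       # (1, 1)   -- physics still hits Entity.position
```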
At this point you would start seeing some problems. The graphics will all be correct, but the physics and sound engines will be completely broken. While the position method was overridden, pos and location are direct references to Entity’s position method. In order to fix this using the current design, you need to do more to RelativeEntity.
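The fix under the direct-reference design is to repeat the aliases in the subclass (a sketch; the two alias lines are the point, the rest is assumed context):

```python
class Entity:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def position(self):
        return (self.x, self.y)

    pos = position
    location = position


class RelativeEntity(Entity):
    def __init__(self, x, y, parent):
        super().__init__(x, y)
        self.parent = parent

    def position(self):
        px, py = self.parent.position()
        return (self.x + px, self.y + py)

    # The two lines every subclass must now repeat to re-point
    # the aliases at its own position method.
    pos = position
    location = position
```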
Every class that inherits from Entity will need to have those two lines added. This is really repetitive and error prone. Furthermore, the need to add these lines will be non-obvious to other developers.
Doing it Better: Proxy Methods
A better solution is to actually use proxy methods instead of making direct references. This isn’t much more complicated than the original implementation.
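Sketched in Python, proxy methods defer the lookup of self.position until the moment they are called:

```python
class Entity:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def position(self):
        return (self.x, self.y)

    # Proxy methods: self.position is resolved at call time,
    # so subclass overrides are respected automatically.
    def pos(self):
        return self.position()

    def location(self):
        return self.position()


class RelativeEntity(Entity):
    def __init__(self, x, y, parent):
        super().__init__(x, y)
        self.parent = parent

    # Only position needs overriding; pos and location follow along.
    def position(self):
        px, py = self.parent.position()
        return (self.x + px, self.y + py)
```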
Now pos and location are their own methods that evaluate self.position at call time. This means if you override the position method in a subclass, the pos and location methods will call the overridden method. Now the RelativeEntity class only needs to define the position method.
Doing it Much Better: Proxy Classes
Why stop at proxy methods? Let’s go all out and make some proxy classes. The Entity class will be super simple now.
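A sketch of the proxy-class approach (the proxy class names are my own assumptions): each engine gets a small adapter that wraps an entity and presents the interface that engine expects.

```python
class Entity:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def position(self):
        return (self.x, self.y)


class PhysicsProxy:
    """Presents the `pos` interface the physics engine expects."""

    def __init__(self, entity):
        self.entity = entity

    def pos(self):
        return self.entity.position()


class SoundProxy:
    """Presents the `location` interface the sound engine expects."""

    def __init__(self, entity):
        self.entity = entity

    def location(self):
        return self.entity.position()
```

Usage is explicit: you hand `PhysicsProxy(entity)` to the physics engine and `SoundProxy(entity)` to the sound engine, and Entity itself never needs to know either interface exists.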
While this approach is significantly more verbose than just using proxy methods, it has several advantages. Code is moved out of the Entity class and into modular proxy classes. If you ever need to add a proxy for another system, you just create a new class. Entity remains clean and maintainable, while the system as a whole is made more extensible. Furthermore, the explicit nature of invoking the proxy classes makes the behavior of your code much more obvious. Everything is now easier to read, modify, and extend.