Scribbles and Bits

by Derek Gathright

Twitter vs Ecosystem

So Twitter is once again in some hot water with their developer community. After a well-intentioned but poorly executed suggestion that developers stop building clients and instead pursue "vertical" ideas, the feedback they've received has been less than ideal. You can read the original post at "consistency and ecosystem opportunities", and some of the media coverage here, here, and here.

Some developers are understanding, others are irate, and many are still pretty confused about what exactly this all means. I'd put myself in the 3rd ("confused") group. But the one thing I do know is that these "suggestions" and TOS updates are clearly directed at UberMedia, in order to prevent them from forking their portion of the Twitter user base. Now the rest of the community has unfairly been dragged into the mix.

As someone who has been doing development on Twitter-related projects since 2007, I figured I'd throw my 2 cents into the mix and give Twitter an idea of where they went wrong, and how they can fix it. It seemed worth a repost here on my blog.

Link to post.

My 2 cents… The reason for the perceived mixed messages for some of us is that many developers don't, and never have been, interested in doing Twitter development as a business. I've created a dozen Twitter clients & apps over the last 5 years, some of which received enough users and press coverage that I could have attempted to turn them into a business, but I didn't. Why? Because it doesn't interest me. I do it for the challenge and the learning experience.

So, the things we hear Twitter saying are "Don't build clients anymore" and "Client apps make bad business". Well, first, as long as the APIs are active and it's not against the TOS, I'm still going to build, develop, and use my own clients. Second, I don't care that it makes "bad business"; that isn't a concern to me. Third, developers can determine for themselves what is or isn't a smart business decision. Fourth, frankly, Twitter Inc has never been regarded as an expert in monetization strategies. Plus, this is info we already knew. For the most part, building a company whose main product is a Twitter client hasn't been a good business decision for a few years (if ever, outside of a lucky few).

On the other hand, there are still markets where it could be good business. For example, where is the official Twitter client for webOS? Messages like "Don't build clients anymore" combined with no official Twitter app on webOS do nothing but hurt the ecosystem for thousands of users. If I were a developer of one of the popular webOS clients, I'd be pretty pissed right now. Heck, as a webOS user I'm not thrilled. I'm sure this is applicable to other ecosystems too.

The point is, Twitter should be more vocal about what it is going to do, as opposed to making coy suggestions to developers (which some perceive as threats) about what they shouldn't do. Twitter is going to heavily focus on front-end user experiences across all platforms? Great! Leave it at that. Let developers decide for themselves what are good and bad ideas. Just arm us with the knowledge of your plans, and we'll worry about our own.

Finally, Twitter, you should be excited to compete with your developers. Much of the innovation over the years has been a product of the developer & user community. Things like mentions & hashtags came from your users. Features like saved searches, lists, trends, and ajax-driven clients were inventions of developers years before they made it into Twitter.com. Essentially, "New" Twitter is just a compilation of the best features from all the 3rd party clients. Do not be hostile. Do not attack them with your TOS. Do not suspend tokens without working with the developer first. Doing these things hurts the community, which in turn hurts you. Your users are your product. Not your platform. Not your website. Not your ads. Your users. - @derek

Installing Npm on webOS 2.0

Now that webOS 2.0 ships with Node.js, one of the first things I tried to do when I got the webOS 2.0 SDK a while back was get npm installed. I eventually succeeded, but it took a little bit of work, so I figured it was worth a post to help anyone else trying to get it installed. For those who aren't familiar with npm, it is a package manager for Node.js (Node Package Manager). In short, it's the easiest way to get Node.js modules installed on your system. It is Node's equivalent to Ruby's Gems, Ubuntu's APT, PHP's PEAR, and Perl's CPAN. So instead of manually downloading libraries/modules, explicitly including them in your source code, and having to manually resolve dependency issues, you can just let npm handle that for you. Installing a new module becomes as easy as typing npm install <module>. The version of Node.js that webOS 2.0 ships with (at the moment) is v0.1.102, which is rather old. The build scripts for the latest npm installer do not work with older versions of Node.js, so after some trial and error, the most recent version I've been able to install on webOS 2.0 is npm v0.1.23. Luckily it's pretty easy to install that specific version, so here's how you do it on your webOS device.

If you are looking for a list of packages, check out http://npm.mape.me/. Or, you can just type npm ls. Are there any plans for npm to be included in webOS? HP/Palm engineers confirmed at the webOS Developer Day event a few weeks ago that there are no plans for npm to ship with webOS. That’s fine with me. Modules should be included in the application package anyways.
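Once a module is sitting inside your app's package, pulling it in from your Node service code is a one-liner. Here's a minimal sketch (the module and its path are hypothetical, just to show the shape of it):

// lib/greet.js (a hypothetical module bundled inside the app package):
exports.hello = function (name) {
    return "Hello, " + name + "!";
};

// Application code:
var sys = require("sys");           // Node's old-style util module
var greet = require("./lib/greet"); // the bundled module above

sys.puts(greet.hello("webOS"));
// Output: Hello, webOS!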

Secure OAuth in JavaScript

Wouldn’t it be awesome if we could use OAuth in JavaScript-only apps? JS is a powerful, expressive programming language, and the browser engines are getting faster and faster all the time. Why not use JavaScript to conduct your API calls and parse your data? In many cases, it is unnecessary to maintain a server-side proxy if all it is doing is making API calls for you and hiding your OAuth keys.

Think about this… If you don't need any server-side processing, your applications suddenly become infinitely scalable, right? We could host on the cheapest of cheap commodity hosting. Heck, if all we're doing is serving static HTML/CSS/JS files, just throw it on a CDN like S3 or CloudFiles and pay per GB.

Before you get too excited, realize that there is a fundamental problem with OAuth in JS. Because JavaScript in the browser is “view-source”, you are always forced to expose your consumer key pair, which compromises the security of your application. sigh

For example, when Twitter recently deprecated their Basic Auth services, that left OAuth as the only authentication method. It was supposed to be the death of JS-only Twitter apps. This was unfortunate for quite a few developers who had leveraged the browser's ability to do Basic Auth to help scale their Twitter apps. I know, because I was one of them.

So then I began to think: what if you weren't forced to expose your keys? What if your JS app could talk to any web API out there, in a secure, user-authenticated way?

Is that actually possible? Yup.

Backstory

Though I didn't know it at the time, my quest for a JS-only OAuth app began two years ago.

When TechCrunch covered the launch of my Twitter client, the app pretty quickly died from the traffic they were sending my way. The problem: 90% of it was written in PHP and used a relational database to store waaaaaay too much data. Neither was designed to scale to 20k users in just a few minutes. After days of tweaking and optimizing, I finally gave up on the design. I realized I didn't need PHP to parse the data, or a database to host the data, so I began a rewrite with the goal of removing as much server-side code as possible. I threw away the database and moved off expensive EC2 onto commodity hosting, where it worked great for the next year or so with some occasional tweaking. As hard as I tried, I never thought I'd be able to completely get rid of the backend, because I needed a proxy to securely handle the OAuth requests to Twitter. "That's ok, close enough," I thought.

One day I was reading the Yahoo Query Language documentation, and I came across a section about using YQL's storage API to hide authentication info to be used in your queries. Ah ha! Could I actually use that for OAuth? I set out to find out. I began learning the ins & outs of OAuth, which included reading RFC 5849: The OAuth 1.0 Protocol many, many times, and staring at the OAuth Authentication Flow diagram for a loooooong time. By the end of the weekend, I had successfully modified my recently rewritten Twitter client's code-base (now YUI3-based) to remove all server-side programming.

Finally! A secure, pure JavaScript solution to OAuth.

Some Prep Work

So let’s crack the code of what is necessary to do OAuth securely in JavaScript.

  • You cannot store your consumer keys inside your JS code. Not even obfuscated. But it has to be stored somewhere web-accessible so your JS code can talk to it.
  • Because of the same-origin policy, that ‘somewhere’ has to be the same domain as your JS app. Unless of course you only rely on HTTP GET, in which case you can do JSONP.
  • Your storage location cannot transmit your consumer key pair back to you. So that means it needs to do the OAuth request on your behalf.

So hmm…. what is web accessible, can talk to APIs, and also has data storage? YQL.

Yahoo Query Language

YQL is an expressive SQL-like language that lets you query, filter, and join data across web services. Along with YUI, it is one of my favorite products Yahoo has for developers. Both are simply amazing tools. I won't go into detail on the specifics of what YQL is in this post, and will instead point you to slides from one of my recent talks on the subject here (best viewed in Chrome). All you need to know for this post is that you can use it to access any web-accessible API. In the case of this post, we'll talk to the Twitter API.

So now that we know it is possible, let’s see it in action.

How It Works

First let’s take a look at how you would call your Twitter friends timeline via YQL w/ OAuth. Using my @derektest user, I created a new OAuth app at dev.twitter.com and used the keys it generated for my user/app combo to generate this YQL query.

SELECT * FROM twitter.status.timeline.friends
WHERE oauth_consumer_key = '9DiJt6Faw0Dyr61tVOATA'
AND oauth_consumer_secret = 'XBF9j0B2SZAOWg44QTu6fCwYy5JtivoNNpvJMs6cA'
AND oauth_token = '18342542-NkgUoRinvdJVILEwCUQJ3sL2CIm2ZwzS5jjj2Lg7y'
AND oauth_token_secret = 'D6ewAzsueTzQmrAJGFH0phV5zgWT88FOtcMeqW4YeI';

So take that query, URL encode it, and throw it into a URL querystring. Like so… https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20twitter.status.timeline.friends%20where%20oauth_consumer_key%20%3D%20%279DiJt6Faw0Dyr61tVOATA%27%20AND%20oauth_consumer_secret%20%3D%20%27XBF9j0B2SZAOWg44QTu6fCwYy5JtivoNNpvJMs6cA%27%20AND%20oauth_token%20%3D%20%2718342542-NkgUoRinvdJVILEwCUQJ3sL2CIm2ZwzS5jjj2Lg7y%27%20AND%20oauth_token_secret%20%3D%20%27D6ewAzsueTzQmrAJGFH0phV5zgWT88FOtcMeqW4YeI%27%3B&diagnostics=true&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys

That unique URL will give you a list of the people @derektest follows (which is only @derek). You can play around with the query in the YQL Console, or view the results in an XML feed.
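If you'd rather fire that query from a page than paste the URL into your browser, here's a minimal JSONP sketch; YQL's public endpoint accepts format=json and a callback parameter, handleTimeline is a hypothetical callback name, and the keys are truncated:

var query = "select * from twitter.status.timeline.friends " +
            "where oauth_consumer_key='9DiJt6...' " +
            "and oauth_consumer_secret='XBF9j0...' " +
            "and oauth_token='18342542-...' " +
            "and oauth_token_secret='D6ewAz...'";

var url = "https://query.yahooapis.com/v1/public/yql" +
          "?q=" + encodeURIComponent(query) +
          "&env=" + encodeURIComponent("store://datatables.org/alltableswithkeys") +
          "&format=json&callback=handleTimeline";

function handleTimeline(data) {
    // data.query.results holds the timeline entries
}

// Inject a <script> tag to make the cross-domain GET:
var script = document.createElement("script");
script.src = url;
document.getElementsByTagName("head")[0].appendChild(script);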

But there's a problem with that query. You guessed it: you've exposed your consumer key pair. So let's work on hiding those keys.

First step, turn the embedded parameters into environment variables by using the SET command.

set oauth_consumer_key='9DiJt6Faw0Dyr61tVOATA' on twitter;
set oauth_consumer_secret='XBF9j0B2SZAOWg44QTu6fCwYy5JtivoNNpvJMs6cA' on twitter;
set oauth_token='18342542-NkgUoRinvdJVILEwCUQJ3sL2CIm2ZwzS5jjj2Lg7y' on twitter;
set oauth_token_secret='D6ewAzsueTzQmrAJGFH0phV5zgWT88FOtcMeqW4YeI' on twitter;
select * from twitter.status.timeline.friends;

Now that we’ve turned all the parameters into environment variables, the next step is to throw the consumer key pair into YQL’s storage so only YQL can access it.

To do this, create a YQL environment file, similar to this one, http://derekgathright.com/code/yahoo/yql/oauthdemo.txt

As you’ll see, it’s just a regular text file where I pasted my consumer key pair, along with importing the YQL community tables using the ENV command. Since we’re replacing the previously included env file (store://datatables.org/alltableswithkeys) with our own, we need to chain-load it back in because it includes the Twitter tables. If you miss that step, you’ll get a “No definition found for Table twitter.status.timeline.friends” error.
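For reference, here's a sketch of what the file's contents look like (keys truncated; check the YQL docs for exact quoting):

SET oauth_consumer_key='9DiJt6...' ON twitter;
SET oauth_consumer_secret='XBF9j0...' ON twitter;
ENV 'store://datatables.org/alltableswithkeys';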

Before we store the env file in YQL, let’s test it with this new query:

set oauth_token='18342542-NkgUoRinvdJVILEwCUQJ3sL2CIm2ZwzS5jjj2Lg7y' on twitter;
set oauth_token_secret='D6ewAzsueTzQmrAJGFH0phV5zgWT88FOtcMeqW4YeI' on twitter;
select * from twitter.status.timeline.friends;

Also, you'll have to change the env file loaded in the querystring to "?env=http://derekgathright.com/code/yahoo/yql/oauthdemo.txt"

(View: YQL Console, Results)

Now that we have our environment file created and tested, let’s tell YQL to import it. To do that, we’ll construct a YQL query similar to:

insert into yql.storage.admin (name,url)
values ("oauthdemo","http://derekgathright.com/code/yahoo/yql/oauthdemo.txt")

Which returns:

execute: store://derekgathright.com/oauthdemo
select: [hidden]
update: [hidden]
You now have 3 keys pointing to your data, and each does something different (think: unix permissions, R/W/X). For more information on what each of the 3 does, see Using YQL to Read, Update, and Delete Records.

For this example we want the execute key, which is really just an alias to our stored env file. So if we change our query’s URL to ?env=store://derekgathright.com/oauthdemo and use the same YQL query as last time, you’ll see we have now hidden our consumer key pair from the public.

(View: YQL Console, Results)

Well there you have it, an example of how to hide your consumer key pair, which now allows you to use YQL as your server-side proxy as opposed to writing & maintaining your own!
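In app code, the whole thing now reduces to a single request. Here's a minimal sketch using YUI3's jsonp module (tokens truncated); note that the env now points at the stored file, so the consumer keys never appear client-side:

YUI().use("jsonp", function (Y) {
    var query = "set oauth_token='18342542-...' on twitter; " +
                "set oauth_token_secret='D6ewAz...' on twitter; " +
                "select * from twitter.status.timeline.friends;",
        url = "https://query.yahooapis.com/v1/public/yql" +
              "?q=" + encodeURIComponent(query) +
              "&env=" + encodeURIComponent("store://derekgathright.com/oauthdemo") +
              "&format=json&callback={callback}";

    // Y.jsonp swaps {callback} for an auto-generated handler name:
    Y.jsonp(url, function (response) {
        // response.query.results holds the friends timeline
    });
});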

A Pure JS Twitter Client is Born

When I started at Yahoo, I wanted an excuse to learn YUI3 and expand my knowledge of YQL. So porting my jQuery/PHP based Twitter client seemed like a logical choice. The result of this work is an open-source project I call Tweetanium. I’m not going to argue it is the most polished or feature-rich Twitter client. In fact, it is quite buggy, and will likely always be that way. It’s just something I toy around with occasionally to try out new things. But feel free to use it if you like. You can play around in it at tweetanium.net.

As proof that there is no server-side code, you can even use a version of it hosted on Github Pages, which is a static file host (no PHP, Ruby, Python, etc…). Hosting off Github Pages was a neat test, and it basically proves you can host JS-only apps on commodity hosting. If you actually need to process data externally, you can use YQL tables for any APIs on the web, even your own custom-built ones (See: YQL Open Data Tables). Any scaling bottlenecks have now been offloaded to Github and Yahoo. The best part about this solution? It's free!

Post some comments if you have questions.

UPDATE: A few people have asked, "But can't I execute YQL queries with your consumer keys now?" The answer is yes. But that isn't as bad as you think, because you only have half of the keys necessary. You are missing the unique keys assigned to a user on behalf of my application, and without those, you cannot make authenticated calls. If you get those, well… then we're into a whole other class of security issue, like physical access to the user's computer or a man-in-the-middle attack.

"Ok, but can't I authenticate new keys posing as your app?" To my knowledge, Twitter does not currently support the oauth_callback parameter, which would let the requester tell Twitter to redirect the user to a URL of their choice. So if EvilHacker tries to authenticate InnocentUser using my consumer keys, InnocentUser will just be directed back to my app's preset URL stored in Twitter's database. In the future, who knows how the OAuth spec, or Twitter's implementation of it, will change. This is mostly a proof-of-concept hack at this point.

High Performance JavaScript (Book Review)

When I saw on NCZ’s blog that he was writing a new book on JavaScript performance techniques, I instantly went to pre-order it. Having partially read through High Performance JavaScript by now, I figured I’d start writing a review of this excellent book.

Since JavaScript is such an expressive language, there are dozens of different ways to do the same thing. Some of them good, some mediocre, and a lot of them bad. It's amazing how much awful JS info is on the web, all leftover from the dark ages of JS ('96 – '05). Up until this point, we haven't had an authoritative source on the topic of how to write JavaScript that performs well, both in and out of the browser. Sure, we've had great books about web performance (High Performance Web Sites is my favorite), but we haven't had anything specific to JavaScript. Now we do.

Nicholas is widely known as one of the best minds in the JavaScript world today. He joined Yahoo! in 2006 as a front end engineer and has been working on one of the most trafficked pages on the interwebs, the Yahoo! home page. His blog (nczonline.net) is a treasure trove of information on all things JavaScript & web performance. Some recent gems include Interviewing the front-end engineer & Writing maintainable code. It goes without saying that he knows his stuff when it comes to JavaScript & performance. As his books and blog posts have shown, he’s also a very skilled technical writer, keeping topics fresh, concise, & relevant.

I’m writing this as I read along, so the verbosity of this post is due to me taking reference notes on interesting things as I go.


Chapter 1: Loading & Execution

Nick doesn't waste any time getting into what the reader wants: fresh tips! Right away we begin to learn the specifics of how browsers react depending on where & how you include your JS. There are many ways that work, but few ways that work well.

Specifically:

  • Why is it important to put your <script> includes just above the closing </body> tag?
  • What is the browser doing while loading those external files?
  • Why should you put all your in-page JS code above your CSS includes? (A: If you put it after a <link> tag referencing an external stylesheet, the browser will block execution while waiting for that stylesheet to download)
  • How you can use the defer attribute in <script> tags to delay non-essential execution of code.
  • A thorough look at dynamic script loading to import & execute your JS without blocking the browser (see the sketch after this list).
  • An overview of some of the common JS loaders used today (YUI3, LazyLoader, & LABjs).
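Dynamic script loading is worth seeing in code. Here's a minimal sketch of the pattern the chapter covers, with the classic IE readyState fallback (the URL is hypothetical):

function loadScript(url, callback) {
    var script = document.createElement("script");
    script.type = "text/javascript";
    if (script.readyState) { // IE
        script.onreadystatechange = function () {
            if (script.readyState === "loaded" || script.readyState === "complete") {
                script.onreadystatechange = null;
                callback();
            }
        };
    } else { // everyone else
        script.onload = function () {
            callback();
        };
    }
    script.src = url;
    document.getElementsByTagName("head")[0].appendChild(script);
}

loadScript("http://example.com/widget.js", function () {
    // widget.js has loaded and executed; parsing was never blocked
});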

While much of the content in this chapter contains common knowledge among experienced developers, it is important knowledge to cover as it serves as the foundation for the rest of the book. Don’t worry, we’ll get more advanced.


Chapter 2: Data Access

Here's where the sexy parts come into play: diagrams, graphs, and benchmarks! This second chapter is where you'll learn how exactly the JS engine accesses data depending on how you store it. The steepest learning curve within JavaScript for beginning developers is understanding variable scope. This is the first time I've ever come across an explanation of JavaScript's [[Scope]] property; now all the scoping & speed issues make perfect sense!

Major topics covered in this chapter:

  • Why do global variables perform so slowly?
  • Why creating data as local variables as opposed to object properties is 10%-50% faster (depending on the browser).
  • Why is it a good idea to create local instances of global variables? (a quick sketch follows this list)
  • Why with, try/catch, and eval are bad ideas from a performance perspective. (A: they augment the scope chain by inserting themselves at the front)
  • What truly happens under the hood when a variable is found to be undefined?
  • Closure scope and why closures can cause memory leaks.
  • How prototypes work and performance issues related to traversing up the prototype chain.
  • Why is it bad to use deeply nested object members (i.e. foo.bar.baz.bop())?
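Here's that "local instances of globals" tip in miniature; document gets resolved once into a local variable instead of being looked up on every pass through the loop:

function collectElements(ids) {
    var doc = document, // local instance of a global: one scope-chain walk
        els = [],
        i;
    for (i = 0; i < ids.length; i++) {
        els.push(doc.getElementById(ids[i]));
    }
    return els;
}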

There were so many “Ah hah! I get it now!” moments in this chapter for me that it alone was worth the price of the book. It took me about 5x as long as it should have to get through this chapter because I was too busy playing with Firebug as I began to learn some of these concepts.


Chapter 3: DOM Scripting

This book contains a few guest author chapters, and this is one of them. In this chapter we learn about DOM scripting by another Yahoo, Stoyan Stefanov.

Many web developers don't understand what exactly "DOM scripting" is, even though they likely do it on a daily basis. Many could tell you what the acronym stands for and that it represents the structure of an (X)HTML/XML document, but most don't know that it also covers the API through which you interact with the document. When you are using document.getElementById("foobar") or myelement.style.color = "blue", you are utilizing a DOM API function accessible via JavaScript, but it has nothing to do with the ECMAScript (aka: JavaScript) standard.

This chapter is chock-full of great best practices & overviews of DOM principles. The first thing we learn is that accessing the DOM is so slow because we're crossing the bridge between JavaScript and native browser code. Jumping between the two is expensive and should be kept to a minimum. There are a lot of tricks & tips that are very under-utilized by most developers when DOM scripting.

For example:

  • Using the non-standard innerHTML is way faster than creating nodes via the native document.createElement() method.
  • When looping through a node collection, you should cache the collection's length in a local variable, because its length property is very slow to access.
  • Iterating via nextSibling can be 100x faster than using childNodes.

This chapter also goes into a detailed explanation of what repaint & reflow are, when they occur, and how understanding them will improve your application performance. The realization I had after reading the R&R explanation is we do stupid stuff all the time simply because we don’t understand how the browser renders and updates our pages. You know how you’ve always heard using margin-left & margin-right as separate styles is a bad idea? Well, here you find out why. Oh, and did you know there was a cssText property you can use to batch your CSS modifications? I didn’t.
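The cssText trick is a one-liner worth committing to memory. A minimal sketch ("box" is a hypothetical element id); one string assignment batches what would otherwise be three separate style writes, and potentially three reflows:

var el = document.getElementById("box");
el.style.cssText = "border-left: 1px solid #000; border-right: 2px solid #000; padding: 5px;";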

As JS libraries get more & more popular, knowledge of good DOM scripting is becoming increasingly rare. Take event delegation, for example. Many developers just presume jQuery's live() or YUI3's delegate() methods are magic. They're far from it, and are actually easy-to-understand concepts. When interviewing candidates for front end jobs at Yahoo!, this is one of the primary concepts we expect them to understand. They may have never used it, but the good ones will figure it out as they are whiteboarding and we walk them through the challenges.
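Stripped of library sugar, delegation is just one listener on a parent element inspecting the event target. A minimal sketch ("nav" is a hypothetical element id):

var list = document.getElementById("nav");
list.onclick = function (event) {
    event = event || window.event;                  // IE fallback
    var target = event.target || event.srcElement;  // IE fallback
    if (target && target.nodeName.toLowerCase() === "li") {
        // handle the click for whichever <li> was hit,
        // including ones added after this handler was attached
    }
};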

JS libraries are awesome, but it's because they abstract out the cross-browser differences & fix a flawed language, not because they allow you to forget what is actually going on under the hood.


Chapter 4: Algorithms & Flow Control

Chapter 4 kicks off with a quick overview of the 4 different types of loops in JavaScript (while, do-while, for, for-in). The first 3 have equivalent performance, but for-in is the one to watch out for; it should only be used when iterating over properties whose names you don't know in advance (i.e. object properties). We then learn about important concepts like length caching and various other optimization techniques focused on reducing the number of operations per iteration.

Next up are conditionals, such as if-else and switch. We learn when it is appropriate to use each one, and when they can be ditched for a much faster method, like using arrays as lookup tables.
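The lookup-table idea is simple enough to show in a few lines; an indexed read replaces what would otherwise be a chain of comparisons (the day-name mapping is a hypothetical example):

var dayNames = ["Sunday", "Monday", "Tuesday", "Wednesday",
                "Thursday", "Friday", "Saturday"];

function nameForDay(day) {
    return dayNames[day]; // one array index instead of up to 7 comparisons
}

nameForDay(new Date().getDay()); // e.g. "Tuesday"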

Finally we come to the topic of recursion. We skip the basics of “What is recursion” and jump straight into browser limitations with call stacks and advanced recursion topics such as memoization to cut out the fat in your stack.
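Memoization is easy to sketch: cache each computed result on the function itself so repeated (and recursive) calls skip straight to the answer. A minimal example with factorial:

function memoFactorial(n) {
    if (!memoFactorial.cache) {
        memoFactorial.cache = { "0": 1, "1": 1 };
    }
    if (!memoFactorial.cache.hasOwnProperty(n)) {
        memoFactorial.cache[n] = n * memoFactorial(n - 1);
    }
    return memoFactorial.cache[n];
}

memoFactorial(10); // 3628800, computed once; later calls hit the cache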

Since the majority of our time spent coding is inside of loops, conditionals, and (if we really want to optimize) recursion, this chapter has great, basic information on efficient shortcuts that will set you apart from the other developers on your team. Techniques learned in this chapter extend beyond the scope of JavaScript and can be used in just about every other programming language.


Chapter 5: Strings and Regular Expressions

Another guest author chapter, this time by regex guru Steve Levithan.

Along with loops, another very common task in JavaScript is string manipulation, most commonly done via concatenation or regular expressions, so it makes sense to give the topic a whole chapter of its own.

When most people start out with JS, their concatenation method is likely var str = "foo"; str = str + "bar"; //str = "foobar", then they discover the += operator and it becomes var str = "foo"; str += "bar"; //str = "foobar". It turns out that one of those is more efficient when it comes to memory usage, and it happens to not be the latter. This chapter provides some memory allocation diagrams to explain the difference and how different browsers perform that operation. It should also be noted that another alternative method of concatenation, ["foo","bar"].join(""); is the preferred method in IE 6 & 7, so that should be considered depending on your userbase.
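Here are the variants side by side; the comments reflect the chapter's memory-use argument, and all three produce the same string:

var a = "foo";
a = a + "bar" + "baz";   // appends onto a directly in most browsers

var b = "foo";
b += "bar" + "baz";      // evaluates "bar" + "baz" into a temporary string first

var c = ["foo", "bar", "baz"].join(""); // array join: the fast path in IE6 & IE7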

The second half of this chapter covers regular expressions, which usually make me cringe. I have no problem writing them, but they’re an absolute nightmare to maintain sometimes. Douglas Crockford has a saying, “If a regular expression is longer than 2 inches, find another method.” I couldn’t agree more.


In this review, I only covered the first half of the book. Here are the remaining chapters:

  • Chapter 6: Responsive Interfaces
  • Chapter 7: Ajax
  • Chapter 8: Programming Practices
  • Chapter 9: Building and Deploying High Performance JavaScript Applications
  • Chapter 10: Tools

If you like what you’ve seen so far, go buy it!

Return to Sunnyvale

So right now I’m sitting in a booth on the Yahoo! campus, the same booth where I set a goal 20 months ago that one day I’d work for Yahoo! and….

[Wavy distorted omg we’re going into a flashback. Begin narration]

My first experience on the Yahoo campus was for Y! HackDay 2008. I remember coming to the campus, being totally lost and overwhelmed, almost like your first day of high school or college. I wasn't an employee or anything. I was just a dumb programmer who wanted a taste of what Silicon Valley was really like. Seriously, I come from the startup world in Kansas City; I was in absolute awe of the place. This is where the Internet happens. Holy shit.

I came to HackDay armed with an idea for a hack to build, but was totally unable to focus, so I just sat around, tweeting, talking, and having fun. The music, the hacks, the food, the beer. I was totally awestruck when I talked to someone who worked at Yahoo!, especially the ones working on products I had used. I knew at that moment this was a place I'd always strive to work at. I knew I just had to work here, and be the person on the other end of that conversation. Through the course of that weekend, I met a recruiter who for one reason or another took interest in my skills and said he'd follow up with me. I didn't expect he would; I figured he was just being nice. A couple weeks later I got a call from him saying he was interested in setting up an interview. I was shocked. "Ok, yeah, umm.. sure, anytime." I was so nervous before that first call. I reviewed just about every book I owned on programming, and I own a lot. I got the call and was speaking with an engineering manager who started asking me all sorts of questions about web development. In retrospect, I totally bombed it, and knew it. Rejected.

Down but not out, I stayed focused; I knew it was attainable, I just needed more time. So, over the next year I did just about everything I could to get my skills up to the level they needed to be for another crack at an interview, always keeping that original interview experience in mind. I had a blueprint. A plan.

A year later I got an email… “I’m back at Yahoo! Want another interview?” It was the original recruiter. “Yeah, absolutely.” The only goal I had this time was getting further than the first. I wouldn’t be totally bummed out if I didn’t get the job, but I at least wanted an on-site interview, just as validation I was making progress. Off I went, studying my ass off for about a week straight, so focused on the lone objective of nailing that phone-screen. The phone rang, and we started chatting. These questions were totally different from the first time. But that’s ok, I knew them. Apparently I did well, and I got an on-site.

The on-site (at the Santa Monica office) went well, and I got an offer. It was a big step leaving Kansas City, but one I'd have always regretted not taking. So off I went, off to sunny SoCal. I started at the Santa Monica office working with the Entertainment team in November. Due to some mix-ups, I never did make it up here to Sunnyvale for training & orientation. Beyond that, there was never much need for me to be up here in person, as we have teleconferencing equipment galore, and these virtual meetings are in our DNA because we have offices around the country and around the world.

So 5 months go by and I finally get up here for the first time. I'm actually glad I didn't get up here before; I get to experience my first day at Yahoo twice. I knew it was going to be weird, a good weird, and I knew memories of that first visit were going to come flashing back. So here I am, sitting in the same booth, sipping my (free) mocha cappuccino, admiring the courtyard, the weather, and the conversations going on around me. This is awesome. I have somewhere to be right now. But, nope…

If you haven’t set goals for yourself, do it. Set big ones. Set life-changing ones. When you achieve those, set higher ones, and just keep rolling. If you don’t have goals, find them. I stumbled across this one because I saw a tweet about HackDay, thought it sounded fun, and stepped on a plane to fly out here almost 2 years ago. Random. Lucky… Bold.

It’s feelings like this that you wish you could just bottle up and relive whenever you want.

So, I guess that’s the reason I’m writing this. A 30 minute slice of awesomeness, carved into this blog.

Dear Twitter, I Quit

As Twttr (sic) celebrates its 4th birthday, I figure it's as good a time as any to blog about something I've been thinking about for a while. No, don't worry, I'm not going to quit tweeting, but I will quit competing. Which, sadly, is probably what they want.

Twitter engineer Alex Payne sent out a prophetic tweet last month. In this message to the Twittersphere, he basically says that Twitter.com is going to be so badass and feature-rich that you’ll soon rethink your need for 3rd party Twitter clients. This caused an uproar in the developer community as many (over-reacting) people took his comments to mean Twitter was going to try & kill off the alternative clients. @Al3x and the rest of Twitter HQ went into damage control mode to explain that Twitter wasn’t attacking alternative clients and that they were still supportive of the developer community. Hugs all around, right? No. I think most people saw the writing on the wall at that point.

I know I have. So, after 2 years of developing my own Twitter clients, I've decided that I'm finally throwing in the towel. Twitter has built a great web app, so there's little need for me to continue. Part of me is sad, but mostly I'm really happy for Twitter. Also, I'm relieved, as I can now focus on something else. A little background… Up until recently, Twitter's own web client (Twitter.com) lacked most of the features that I wanted, so I was forced to build them on my own. I began building Tweenky almost 2 years ago, and the goal was simple: create a Twitter web client that had the following features:

  • A friendly Ajax interface
  • Integrated searching
  • Groups
  • Saved searches
  • A fix for the @reply problem, where replies were not visible in your replies feed unless they started with "@username"
  • Other basic shortcut features (like retweet links)

When it was ready in the summer of '08, I released it to the wild with the help of TechCrunch and other tech blogs, who all praised its set of features. I'm not going to claim I was the only one working on such features. Most of them were just obvious extensions to how people really wanted to use the Twitter service. They would have been implemented by Twitter themselves had the service been stable enough to free up resources for feature development. It's funny to think that between 2006 and 2009 Twitter.com remained largely unchanged. Why? Because the service was generating too many failwhales, and fixing those was the #1 priority.

By 2009, the engineering team had rebuilt Twitter into a stable platform and they were finally able to let the front-end developers loose and start working on features. First came some ajaxy goodness, then integrated searching and the replies/mentions fix. Later in the year they added Lists and the Retweet feature. At that point, I noticed Tweenky had started to become less & less useful. Others did too, and the userbase started to decrease.

Enter 2010… The front-end team is beginning to crank out features & tweaks at a fast pace. So far this year we've seen hovercards, location detection, and integrated maps. It's finally at the point where the pace of innovative features is outstripping what the developer community can keep up with. There are still some major clients, such as Tweetie (on the desktop), that haven't even integrated Lists yet. I'm not going to attempt to work on hovercards or integrated maps, not because I can't, but because what's the point? I've actually begun using the Twitter.com web client more than my own client, because mine simply lacks essential features. Sure, I can add them, but by the time I've finished, the larger-than-1-person front-end team at Twitter will have rolled out a couple more slick features, and I will always be playing catch-up.

So here's the point of this post… I'm done. From here on out I suspect the majority of my Twitter time will be spent on the Twitter.com web client. Don't take this the wrong way, I'm actually really happy for Twitter and the awesome front-end/UX team they've assembled (which includes a number of ex-Yahoos =D ). They've implemented most of the "must-have" features that 3rd party developers have been working on for years. This is a good thing, because those features are now available to the majority of the Twitter userbase instead of a small portion. I suspect over the course of 2010 and beyond, the pace at which we see new features will continue to increase, and with every new release, more & more 3rd party developers will cease working on their own clients. This will be a bitter pill for some in the developer community to swallow, but the side-effect is they'll spend less time on simple, basic features that Twitter.com should have, and hopefully more on innovative non-client apps or things completely unrelated to Twitter.

I’m mostly happy with this direction. The main reason I’ve developed Twitter clients is to geek around and gain experience in areas I feel my knowledge is lacking. I’ve never approached my client development as “OMG, I have to get as many people as possible to use this thing so I can make money and/or sell it!” I’ve never attempted to monetize my work. I’ve just approached it as there’s a certain user experience I want to have with Twitter, and if anyone else wants to join the fun, cool. No? That’s cool too. Work hard and good things will come.  Having converted the original Tweenky client from mostly PHP to all JavaScript, I’ve been able to gain valuable experience with jQuery, YUI3, & JS in general. To me, that is satisfying enough. All the JS, REST API, and scaling knowledge I gained through this process is one of the reasons I now have a job at Yahoo.

So what's next? I dunno. If I'm spending X fewer hours per week trying to replace Twitter.com, I can now spend X hours working on something else. I'll most certainly work on some non-client Twitter apps, but I'm hoping to spend the majority of my time on projects unrelated to Twitter. Maybe some much-needed Node.js hacking? Maybe some webOS apps? Hmmm… Stay tuned.

P.S. Tweenky has always been an open-source project. You can find the source code on GitHub. You can also find Tweenky’s cousin “Tweetanium” (a YUI3 rewrite) on GitHub as well.

Node-yql

The more I play around with Node.js, the more I love server-side JavaScript. Once you get over the weirdness of writing JavaScript outside of the browser, it feels very natural. And the bonus is that it is blazing fast.

Also, as I’ve been playing around with YQL (Yahoo Query Language) more lately, I realized I wanted to be able to access YQL data from within my Node app. So, I created a node-yql module.

Now you can do something like…

YQL.get("SELECT * FROM weather.forecast WHERE location=90066", function(response) {
    
    var location  = response.query.results.channel.location,
        condition = response.query.results.channel.item.condition;
    
    sys.puts("The current temperature in " + location.city + " is " + condition.temp + " degrees");
});
// Output: The current temperature in Los Angeles is 57 degrees

jsFiddle: A JavaScript Playground

Ajaxian had a story yesterday about a brand-new JavaScript playground called jsFiddle. A web-based write-and-execute JavaScript IDE is nothing new, but this is much, much more than that.

The real power of jsFiddle is that you have the option to include any of the most popular JS libraries, including: MooTools, jQuery, Prototype, YUI2.8, YUI3, Glow, Vanilla, Dojo, Processing.js, & ExtJS. This feature gives anyone the ability to try out any of these libraries without going through the chore of downloading, extracting, and coding up some examples. With a few mouse-clicks you can view example snippets from any of the major JS libraries and start editing them to see how they work.

As if that wasn’t enough, jsFiddle also includes social features that give you the ability to write a snippet, save it, and share the URL. As I was hanging out in various JavaScript IRC chatrooms tonight, I continually found myself using jsFiddle to code up snippets to answer questions. In the past, everyone would always just use Pastebin.com, but that lacks any interactive features. Now you can use jsFiddle as a replacement to Pastebin for any JS, HTML, or CSS snippets and the user will have the ability to actually edit, execute, and view the output.

As icing on the cake, you can take your snippets, copy the embed code, and paste them anywhere. Here’s a snippet that I was able to code up in about 15 minutes (writing this blog post took longer than that!) to demonstrate the power of YUI3, YQL, and the Twitter API. In this iframe, you’ll find all the JS, CSS, and HTML you need to create a simple little Twitter widget.

In all, this is an amazing product that I’ll likely find myself using on a daily basis.

Crockford on JavaScript

I just finished watching Part 1 of Douglas Crockford’s ongoing lecture series on JavaScript, and it’s fascinating stuff. A must watch for any programmer. Even if you don’t code in JS, it’s worth watching simply because this first part is all about the history of programming. (video of talk is below)

As web developers, we spend anywhere from a little of our time to the majority of it coding in JavaScript, but few of us know the history behind the language. I'm not talking about just reading the Wikipedia article and knowing that it was created by Brendan Eich at Netscape in '95, I'm talking about the history of where the ideas behind the language came from and everything that influenced it. Like most every language, JavaScript's syntax and style didn't appear out of nowhere; it was influenced by a number of different languages, and those influencers were in turn influenced by a slew of languages.

It's easy for those of us who started programming with C (or anything after) to just look at it as the "Alpha" language and ignore everything that happened before it, but that misses a lot of really important history that we, as professionals, should know. It's like a politician in the United States just ignoring everything that happened before 1776. Learn from the mistakes of the past, spot the trends going forward, and pave the best path. Crockford shows us snippets of languages that were created in the '60s and '70s, dissects them, and explains why certain people thought they were good ideas at the time. It's amazing to think that there was a time before modules or functions, or before we had figured out the best way to format a for loop. The history of programming languages is littered with a ton of bad ideas, but also occasional brilliant ones. Those brilliant ideas are what get refined and lay the foundation for the next generation of languages.

Finally, one concept he goes back to over and over that I found really interesting is that programmers are a very stubborn breed. We all know this. There's little point to all our flame wars about which language or framework is better, and most of it comes from either insecurity or ignorance. He says it takes a long time for us to evolve, and he's right. It's not because new ideas aren't coming along all the time; it's because the adoption of new ideas only takes place at each generational shift, when the "old" thinkers get replaced by those with fewer preconceived notions. The world didn't wake up one day and realize that GOTO statements were bad; it's that those who supported GOTO and argued for it for a decade finally retired. Out with the old, in with the new. That's evolution.

Anyways, I could go on and on about all the "Ah hah!" moments in this talk, but you really need to watch it for yourself. I'll probably chime in again after part 2, which I'm going to watch right now. I'm excited. It's like a sequel. "Ooo! What happens now?!"

Also, here’s the “Mother of all Demos” video he mentions about halfway through.

Amazon and Lala: What Could Have Been


It's now been about 6 weeks since Apple bought Lala, and I've spent some time reflecting on the acquisition. When I first heard the news, it sounded like a good fit. After all, Lala is essentially a web-based version of iTunes and has some great technology powering it. It makes sense that Apple would want to buy the next best thing and get some great engineers in the process. However, I didn't think at the time that Apple's strategy would become so clear, so soon. Apple's acquisitions usually take years to come to fruition. Not this time, though. TechCrunch recently reported that Apple is planning on transforming iTunes into a cloud-based iTunes.com service, and Lala's technology is the quickest way to do that. ("Apple's Secret Cloud Strategy And Why Lala Is Critical")

Seeing the immediate impact Lala's technology can have, I began to think about who else was in the bidding war for Lala. Most reports say there were multiple companies interested, so you have to assume a few of the typical parties were involved: Google, Amazon, Microsoft, Yahoo, AOL, Facebook, & MySpace. Only a couple of those companies stand out as a great fit: Amazon & MySpace. Right now MySpace has too many problems to deal with, so that leaves just one likely suitor… Amazon.

Amazon's entry into the digital music download space has been game-changing. Prior to Amazon.com/mp3, music lovers had nowhere to go to purchase non-DRM'd MP3s. We were stuck in the world of buying CDs to rip, buying DRM'd tracks from iTunes, or of course… pirating music. When Amazon came into the market in 2008, their impact was immediately felt as prices began to drop and DRM began to die. This opened the floodgates to other services who also began selling non-DRM'd MP3s, and music streaming became a sustainable business model. Without Amazon's entry, I suspect little would have changed over the past 2 years. Having competition for Apple is vitally important to the evolution of the media industry. Apple is an amazingly innovative company, but like most companies, they grow content & less innovative without anyone breathing down their neck.

As painless as Amazon has tried to make the downloading process when you purchase tracks from their MP3 store, it is still not as smooth and elegant as iTunes. To add insult to injury, once the download is complete, Amazon's user experience is then transferred over to iTunes (for most users), where the user must import the purchased files to begin listening. Amazon clearly needs to do something about this. Transferring a customer into your rival's product at the end of the transaction process is a giant flaw in product design. They need to provide their customers a way to stay in an Amazon environment throughout the Purchase->Download->Listen->Manage cycle. This is where acquiring Lala would have been perfect.

Had Amazon bought Lala, they would have obtained the engineering team that is hands-down the best at building a web-based media manager. After integrating Amazon MP3 with Lala, they could then have folded the Amazon Video & Kindle management interfaces into the Lala-based manager. Beyond music, video, and books, Amazon could then begin to expand into other areas, perhaps buying a company like Roku to make their streaming video experience end-to-end Amazon as well. We could have had a real competitor to iTunes. Sadly though, Amazon dropped the ball with Lala, especially since the acquisition only cost Apple $17 million. That's nothing for a company like Amazon, which just spent over a billion dollars to purchase Zappos.

We’re clearly approaching a time where our music devices are going to have constant wireless broadband connections. You won’t have to worry about locally storing music, it can all be hosted in the cloud and streamed to you on demand. This isn’t a brand new concept, but the timing is right for it to finally become mainstream. If Apple is successful in transforming iTunes into iTunes.com, unchallenged, they will likely be able to declare “game over” with music delivery in the US.

Hopefully Amazon sees the writing on the wall and steps up their game to provide a challenge. Is Spotify our only hope?