Wednesday, November 30, 2005

Google Maps, Saving Lives

Can this be true? Can Google Maps actually save lives? Maybe.

Yesterday we were talking about how Google Maps has been used rather creatively, my favorite example being GMRisk. My co-worker mentioned that he is a volunteer firefighter and had thought about using Google Maps to plot the locations of fire hydrants. That is what got me thinking about how Google Maps could be used as a public service, by making dynamic maps inexpensive enough for organizations that couldn't otherwise afford to provide them.

For example, the Georgia Sex Offenders Map makes it very easy to see where these people are, and if you have children, where you might not want to move. This is the type of information that you could typically find on the web, but it was cumbersome to find geographic relationships in the data. It was like trying to see who lives on your street by reading the phone book. The data just wasn't organized in a way that was easy to use. Google Maps is making this easier, and now affordable to do.

So again, can Google Maps save lives? Yes, I think that maybe someday it will.

Tuesday, November 29, 2005

AJAX Podlets for Lower TCO

This past weekend I was playing around with creating a personalization system that could be plugged into any web site. It is basically an experiment, so it is almost completely driven by JavaScript. One thing I found was that using AJAX it was possible to create, for example, a login form that could potentially be placed on any site with almost no coding. This is because AJAX changes the way we validate forms. In this entry I will discuss the traditional way of handling a login (or any other) form, and the AJAX way.

The traditional approach to form handling is a four part coding process.

1. Display an HTML form.

Just your everyday plain old HTML form.

2. Server-side validation of the form.

My rule is that my developers MUST implement validation on the server, and OPTIONALLY on the client. Because JavaScript runs on the client, it can be manipulated, or turned off entirely, so there must be a server-side validation component to the process.

3. On validation error, redisplay form with pre-populated values.

If there is a validation error you need to redisplay the form. If you want to be user-friendly, you also need to pre-populate the form with the values that the user has already entered. This is where we really deviate from a clean and reusable implementation.

4. On successful validation, show "Thank You" page.

This could be an entry confirmation, the main page after authentication, or any other page that lets the user know that the process has been completed.

The Issues

The messy part in the implementation is step #3, where we need to redisplay the pre-populated form. We will need to set the values of the text boxes, write a little code to pre-check checkboxes, and functions to select the correct options in drop-down menus. This typically requires some sort of server-side template mechanism, which could be JSP, PHP, ASP, Velocity, HTML::Template, Mason, or any one of a hundred different tools.

This is further complicated if, for example, I want to place this form on multiple pages. I will need to templatize every page that the form appears on, which may also involve changing links if a page needs to be renamed from .html to .asp.

What if I wanted to use this form on multiple servers using different technologies? I might end up coding an ASP version on one server, and a PHP version on another. I would end up doing twice the work.

What if I wanted to reuse the form, perhaps on multiple clients' sites? Pure development houses often have a library of prebuilt applications that can be pulled off the shelf and customized for a client. This lowers the cost of delivering a product.

The big question... How can I create a reusable component from a form?

The Answer

You can tell by the title of this entry what the answer is. It's AJAX. AJAX changes the model for creating a form by removing step #3 ("On validation error, redisplay form with pre-populated values"). When the user submits the form we can submit the data to the server using AJAX, and the server returns just the validation/submission results instead of having to redisplay the form when an error occurs.

The flow is more like this:

1. Display an HTML form.
2. User clicks "submit", data is sent to server via AJAX.
3. Browser receives results of validation, displays errors.
4. If no validation error, browser redirects to "Thank You" page.

This solves every one of the traditional issues, and if done with care, it is possible to package the form as a component that can be simply added to any page.

A Concrete Example

For my login component I am using three JavaScript libraries:

Prototype : Form helper functions, AJAX helpers, and much more.
script.aculo.us : Visual effects.
Behaviour : Add events in a script include (i.e. clean separation of code and logic).

Here is my HTML for my login component. I can drop this into any page.

<div id="my_login_area">
<form id="my_login_form" action="" method="post">
<div class="my_login_text">
username
</div>
<div class="my_login_input">
<input id="my_login_username" type="text"
name="username" value="" />
</div>
<div class="my_login_text">
password
</div>
<div class="my_login_input">
<input id="my_login_password" type="password"
name="password" value="" />
</div>
<div class="my_login_input">
<input id="my_login_button" type="button" value="login" />
</div>
<div id="my_login_status"></div>
</form>
</div>


Note the absence of any onclick, onsubmit, or other event attributes. The Behaviour library will allow us to add the events in our script file, so this HTML won't change. Because this is pure HTML, without even JavaScript, your designer can plug it into their design and add CSS to style the form without any help from a developer. And because the developer doesn't need to know anything about the HTML, other than that this HTML block is being used without change, they don't need to give any special instructions to the designer. This clear separation makes development and maintenance easier and less costly.

The code below should preferably live in an external JavaScript file, though it could also go in the head of the HTML page.


Behaviour.addLoadEvent(
  function () {
    Form.enable($('my_login_form'));
    Form.reset($('my_login_form'));
    Form.focusFirstElement($('my_login_form'));
    $('my_login_status').innerHTML = '';
  }
);

var myrules = {
  '#my_login_button' : function(e) {
    e.onclick = function() {
      sendLogin();
    };
  }
};

Behaviour.register(myrules);


The call to Behaviour.addLoadEvent() adds the shown function to the body onload event handler. This code uses the Prototype library to enable and reset the form, put focus on the first element of the form, and clear any status message.

All the rest does is add an onclick event to the submit button, which calls sendLogin() when the button is clicked. It might seem counter-intuitive to do it this way instead of just adding onclick and onload attributes to the HTML code, but again, this is to completely separate the HTML from the code.

The last step is to write the actual AJAX part to query the server. For this login component the server will send back a simple text message containing up to 3 lines of text.

Line 1: "OK" on success, or "FAIL" on failure to authenticate
Line 2: On success this will be the value of the session cookie.
Line 3: On success this will be the page to redirect to.
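
For example, a successful response and a failed one might look like this (the cookie value and redirect path here are made up for illustration):

OK
a1b2c3d4e5f6
/members/home.html

FAIL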

Here is the code.


function sendLogin ()
{
  Form.disable($('my_login_form'));
  $('my_login_status').innerHTML = 'Status: Authenticating.';
  new Effect.Pulsate($('my_login_status'), {duration: 7.0});

  var url = '/cgi-bin/login.cgi?'
    + Form.serialize($('my_login_form'));

  new Ajax.Request(url, {
    asynchronous: true,
    method: "get",
    onSuccess: function (request) {
      setLogin(request);
    },
    onFailure: function (request) {
      $('my_login_status').innerHTML
        = 'An error occurred. Unable to authenticate.';
    }
  });
}

function setLogin (request)
{
  var res = request.responseText.split(/\r?\n/);

  if (res[0] == 'OK') {
    $('my_login_status').innerHTML
      = 'Success. Starting session.';
    document.cookie = 'myLoginKey=' + res[1];
    document.location = res[2];
  }
  else {
    $('my_login_status').innerHTML
      = 'Incorrect credentials.';
    Form.enable($('my_login_form'));
  }
}


I leave dissecting the code above as an exercise for the reader, but a brief explanation will help. When the user submits the form, the Form.serialize() helper from the Prototype library creates a querystring of the form data so that we can send it to the server. Right before the data is sent we use the Effect.Pulsate() visual effect from the script.aculo.us library to flash the status message so that the user can see that something is happening. Visual cues like this are important since we are taking a non-traditional form submission approach that the user may not be familiar with. We then send the request and set handlers for the success and failure events.

The setLogin() function is called when the results are returned from the server. In our case a successful login will set a session cookie, and then redirect us to some other page, as specified by the server response. On authentication failure we let the user know, and let them try again.

In the end, assuming that we put all of our JavaScript in external files, we have a complete component that can be dropped into any HTML page, and solves all of our specified issues.

Monday, November 28, 2005

Ruby, the Next Big Thing?

Probably not.

A co-worker asked me today if I had heard of Ruby. A friend of his told him that Ruby was "the next big thing". I conveyed what little I know about Ruby, and guessed that his friend is probably a new Ruby on Rails programmer, caught up in its success. I explained that it is unlikely that Ruby will be the next big thing.

From my personal experience it seems that new programmers who pick up a dynamic scripting language, such as Ruby, Perl, or Python, are amazed at the flexibility of the language. Often this leads to a proclamation that it will be "the next big thing". I say this because I was one of them, a devoted Perl developer since the mid-90's. Along the way I found myself straying from my desired Perl path over to the Java track, and eventually I saw things in Java that I never saw in Perl. Those things are important, especially when you start building very large applications that can't afford downtime.

I'm sure there will be a few Ruby fans wanting to scream about now, but that's ok, everyone is entitled to their opinion. For those fans I want to say that I want to see Ruby be the next big thing, and that I am on your side. Java as a language stinks, it is long-winded, and simple things are sometimes hard to do. In Perl I can write a SOAP client with one line of code, I love that part of Perl.

Perhaps I should explain what Ruby doesn't have. For me the main two are the lack of high-end application servers and the lack of tools.

Application Servers

When I search Google for "ruby application server" I get only a few pages of results, with only WEBrick coming up a couple of times and not much else. This is fine for a small shop, but if I have an enterprise application that needs to run across a dozen systems, then WEBrick isn't going to be up to the task. It is also likely that for an application this large I will expect a high level of support for the product. For some reason I don't think that I will find this for Ruby.

Development Tools

For my Java projects I use Eclipse, Maven, Checkstyle, JUnit, JCoverage, and more. I use Axis, Spring, Hibernate, Struts, and more. The Java language has a rich set of mature tools and frameworks that make team programming easier, refactoring faster, and maintenance less of a chore. I am not saying that Ruby is without tools, I am just saying that there aren't as many of them. With Java, the quantity and quality of tools means that I can choose the tools that best fit my shop, and that I don't need to settle with whatever is available.

Summary

I hope that I haven't offended too many people, but a great language can't be great at everything. And Ruby is a great language, but it isn't Java, it isn't C#, and it definitely isn't Lisp. It has weaknesses, and I feel that its major weaknesses are the lack of enterprise-level servers and tools. If you disagree, and have a great story on how your Ruby server handles a million page views a day, then please leave a comment, I want to hear about it.

Thursday, November 24, 2005

Google Analytics Service Announcement

For those lucky enough to have a Google Analytics account, you should have received an email from Google explaining what happened, and what they are doing to fix it. For those without an account, yet still trying to get one, let me post some of the contents here.

Disclaimer: I am a big fan of Google, so I'm not trying to bash them, I'm just trying to inform. After all, I will be posting this on a free Google blog, I received the email via my free Google email account, and it is likely that you are finding this post via the free Google search. So thanks, Google.

Now, the news.

First, due to extremely high demand, we've temporarily limited the number of new signups as we increase capacity. This allows us to focus on our primary objective--to provide a great user experience for our existing users.

Looks like those without an account will need to wait just a little longer.

The 'Check Status' button is being reworked to check for properly installed tracking code. This should be fixed by the end of November.

This only affects users that have an account, as it seems that Google Analytics launched with some broken functionality.

The '+Add Profile' link has been temporarily removed until we increase capacity. We'll alert all current users when the feature is restored.

Again, this is only for those that already have an account. It basically means that we won't be able to track any additional sites. At this time I am tracking four sites, including this blog.

While we increase capacity, you may see longer than normal delays in data showing up in your reports. All data continues to be collected and no data has been lost.

If you have an account, and have been watching your data slowly come in, this is no surprise. The tool states that data reporting is delayed 12 hours, but in practice the delay is currently around the 30 hour mark.

So, the only news is that Google is letting us know that there is a problem, but really hasn't given us any idea when it might be fixed.

Wednesday, November 23, 2005

Build Your Own "Personalized Google" Page

One of my favorite Google tools is the personalized home page. I use it to keep up to date on current events, email, weather, and more. If you haven't seen it, it is worth checking out. Go to google.com, and click the "Personalized Home" link in the upper-right corner. One of the "cool" features for me was the ability to drag-and-drop the various news pods around the page. This allows me to group the information in a way that makes sense to me.

Being a developer, I started thinking about how Google allowed users to do that using only JavaScript. I even tried to code something similar, but didn't get too far before real work got in the way. Recently though I was looking at the script.aculo.us JavaScript library, and realized that it was up to the task. In this article I will briefly explain how script.aculo.us makes this a fairly simple coding job.

First you will need to download both the script.aculo.us library and the Prototype JavaScript library that it relies on. Once you have both of these, you should create a simple HTML page that looks something like the one below.

<html>
<head>
  <script type="text/javascript" src="js/prototype.js"></script>
  <script type="text/javascript" src="js/effects.js"></script>
  <script type="text/javascript" src="js/dragdrop.js"></script>
  <script type="text/javascript">

  // [JAVASCRIPT]

  </script>

  <style>

  /* [CSS] */

  </style>
</head>
<body>

[CONTENT]

</body>
</html>

In the code above I am assuming that you placed all of the JavaScript files in a subdirectory called "js", just to keep things tidy. Note the three placeholder sections in the example; these designate the places where we will add the CSS, JavaScript code, and HTML content pieces that are coming up next.

The script.aculo.us library includes a Sortable class, which allows you to create a sortable area. This might seem non-intuitive since we aren't sorting anything, but in a way we are. Our goal is to mimic Google's three column layout. So what we need are three containers that contain a list of pods (or components if you will). We want to drag the pods to new locations in their own column (i.e. change the sort order), as well as be able to move them to another column. The script.aculo.us Sortable class does just this, by allowing us to change the order of elements (i.e. sorting) by dragging them around the page.

So next we need to create three columns, each with its own container element. Since we need three columns, an HTML table seems appropriate, and the container for each column of sortable items will be an HTML <td> tag. The Sortable class will need to know the ids of the container tags, so we will include id attributes in the HTML.

<table width="100%">
<tr>
<td id="container1">[COLUMN 1]</td>
<td id="container2">[COLUMN 2]</td>
<td id="container3">[COLUMN 3]</td>
</tr>
</table>

Next we need to add some content items to the containers so that we have something to move around. In the example below I am using very simple items, just plain divs with some textual content. The code that we will be writing to make these movable will require that they are all <div> tags, but note that a div tag may contain any additional content that you want, including images and nested tables.

<table width="100%">
<tr>
<td id="container1">
<div class="item">Content Item 1</div>
<div class="item">Content Item 2</div>
<div class="item">Content Item 3</div>

</td>
<td id="container2"></td>
<td id="container3"></td>
</tr>
</table>

The next step is to write the JavaScript code that allows us to move the div areas within a container, and between containers. This is done by calling Sortable.create() for each of the three columns. An example of this is below, after which we will examine the arguments that we are passing to the create method. If you are following along, this code should be placed where we put the [JAVASCRIPT] marker in the starting HTML code.

window.onload = function () {
  var params = {
    tag: 'div',
    ghosting: true,
    containment: new Array("container1", "container2", "container3"),
    constraint: '',
    dropOnEmpty: true
  };

  Sortable.create($("container1"), params);
  Sortable.create($("container2"), params);
  Sortable.create($("container3"), params);
};

We first create a parameter object to hold all of the options we need. We could have passed the options inline using the {} notation, but since we have three containers it is easier to use a shared parameter object. The tag setting specifies the content tag type inside our containers, in our case a div. Setting ghosting to true gives the user a visual cue by showing a ghosted version of the content that is being moved. The containment setting lets us specify that dragging and dropping is allowed between containers as well as within the same container. The constraint setting restricts dragging to vertical or horizontal movement; we want both, so we set it to an empty value. The last setting we use is dropOnEmpty. By default dropOnEmpty is set to false, which prohibits dragging items into an empty container, and that isn't desirable for this application.

Next come the three Sortable.create() statements. The funny looking $() function is part of the Prototype library and is the same as calling document.getElementById(), except that it is a lot shorter. I prefer using $() as it makes the code easier to read.
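
For example, the following two calls return the same element:

// these two lines are equivalent
var el1 = document.getElementById('container1');
var el2 = $('container1');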

If you try the code as it is, you should be able to drag and drop between containers, and sort items within the same container. So we are pretty much done. The only addition I want to make is to change the cursor to a move pointer when the mouse is over any of the items, and add some color to the containers to make them easy to see. You may have noticed that I added an "item" CSS class to the div items in the HTML earlier. Here is where we will use it. The following code gets placed in the [CSS] section of the original HTML.

.item {
  cursor: move;
}
#container1, #container2, #container3 {
  background-color: #e0e0e0;
  width: 33%;
  vertical-align: top;
}

Adding that, you should have a (not quite) completed personalized home page. From here you should add some additional CSS, create some real content, and maybe even create a mechanism to save the sorting changes via AJAX calls (a sketch of this follows below). You can get more information on other available settings and events on the Sortable page of the script.aculo.us wiki.
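
Here is a minimal sketch of that last idea. It assumes each item div has been given an id in the name_number format that Sortable.serialize() expects (e.g. id="item_1"), and that a made-up server script at /cgi-bin/savelayout.cgi stores the result. Sortable.create() accepts an onUpdate callback that fires when a container's order changes:

var params = {
  tag: 'div',
  ghosting: true,
  containment: new Array("container1", "container2", "container3"),
  constraint: '',
  dropOnEmpty: true,
  // fires whenever the order of this container changes
  onUpdate: function (container) {
    // e.g. "container1[]=1&container1[]=3&container1[]=2"
    var order = Sortable.serialize(container.id);
    new Ajax.Request('/cgi-bin/savelayout.cgi', {
      method: 'get',
      parameters: order
    });
  }
};

Sortable.create($("container1"), params);
Sortable.create($("container2"), params);
Sortable.create($("container3"), params);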

Tuesday, November 22, 2005

Prototype.js : PeriodicalExecuter

In a previous entry I kicked off this series of posts on the Prototype JavaScript library, with this being the third in the series. In this entry I will cover the small but useful PeriodicalExecuter class.

The PeriodicalExecuter, as of the current release (1.4.0_rc2), allows you to start a task that will be repeated every so many seconds. It does allow you to pause and resume execution by modifying the currentlyExecuting property of the instance, but there is currently no way to stop the execution timer. It also ensures that the task runs in a synchronized manner, meaning it will never be executing more than once at any given time. If, for example, the task is still running the next time it is triggered, that execution is skipped and nothing happens until the next trigger time.

API Summary

new PeriodicalExecuter(callback, seconds)
pe.callback
pe.frequency
pe.currentlyExecuting

API Details

new PeriodicalExecuter(callback, seconds)

Params:
callback - Function reference
seconds - Number of seconds between executions

Returns:
A new PeriodicalExecuter object.

Creates and starts a new periodically executing task. The callback passed to the constructor will be executed periodically, based on the number of seconds passed to the constructor. The callback function is executed as if it were a method of the PeriodicalExecuter object that is returned, so you may set properties of the object to store data between executions.

function showAlert ()
{
  this.counter = this.counter ? this.counter + 1 : 1;
  alert(this.counter);
}
var pe = new PeriodicalExecuter(showAlert, 10);

pe.callback

A reference to the callback function passed to the constructor. Changing this property after constructing the object WILL change the method that gets executed.

pe.callback = function () { ...do something else... }

pe.frequency

The frequency in seconds that the callback is executed. Changing this value will not alter the rate of execution.

pe.currentlyExecuting

A boolean flag indicating whether the callback is currently being executed. Setting this value to true will cause execution of the callback to temporarily stop until it is set back to false.
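
For example, building on the pe instance created above, pausing and resuming looks like this:

// pause: executions are skipped while this flag is true
pe.currentlyExecuting = true;

// resume
pe.currentlyExecuting = false;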

Monday, November 21, 2005

Prototype.js : String and Number

In a previous entry I kicked off this series of posts on the Prototype JavaScript library, with this being the second. In this entry I will cover the extensions that Prototype adds to the standard JavaScript String and Number objects.

API Summary

String  string.camelize()
String string.escapeHTML()
String string.inspect()
String string.parseQuery()
String string.stripTags()
String[] string.toArray()
Object string.toQueryParams()
String string.unescapeHTML()

Number number.succ()
Number number.times(iterator)
String number.toColorPart()

API Details

string.camelize()

Params:
None.

Returns:
String value in camel notation.

Takes the value of the String object and returns it in camel notation, meaning the first letter of each word is capitalized, except for the first word. Words must be separated by a dash ('-').

// newStr = "thisThatOther"
var str = 'this-that-other';
var newStr = str.camelize();

// same thing
var newStr = 'this-that-other'.camelize();

string.escapeHTML()

Params:
None.

Returns:
An HTML escaped string.

Returns the String value as an HTML escaped value.

var unescaped = 'escape <b>this</b>';
var escaped = unescaped.escapeHTML();

// same thing
var escaped = 'escape <b>this</b>'.escapeHTML();

string.inspect()

Params:
None.

Returns:
A quoted JavaScript value.

Returns the String value as a quoted and escaped JavaScript value. Note: I have seen some odd behavior where only the first single-quote in the string is escaped, but I am not sure if this behavior is limited to only certain browsers.

var unescaped = "escape 'this'";
var escaped = unescaped.inspect();

// same thing
var escaped = "escape 'this'".inspect();

string.parseQuery()

Params:
None.

Returns:
An Object that has a property for each query param.

Parses the target String value as a querystring, creating an Object to hold the parsed values. Caution: if there are multiple values for a single key, only the last value in the unparsed string will be retained.

var query = "x=123&y=abc&z=456";
var queryObj = query.parseQuery();
// queryObj.x == '123'
// queryObj.y == 'abc'
// queryObj.z == '456'
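
To illustrate the caution above:

var dup = 'x=1&x=2'.parseQuery();
// dup.x == '2'; the first value is lost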

string.stripTags()

Params:
None.

Returns:
String value with HTML tags removed.

Returns the value of the String object with any HTML tags removed.

// noTags == 'Hello World'
var tags = 'Hello <b>World</b>';
var noTags = tags.stripTags();

// same thing
var noTags = 'Hello <b>World</b>'.stripTags();

string.toArray()

Params:
None.

Returns:
Array of characters.

Returns the value of the String object as an Array object of single characters.

var str = 'Hello';
var strArray = str.toArray();
// strArray == new Array('H', 'e', 'l', 'l', 'o');

string.toQueryParams()

Params:
None.

Returns:
An Object that has a property for each query param.

Same as the parseQuery() method. See string.parseQuery() above.

string.unescapeHTML()

Params:
None.

Returns:
String value with HTML entities unescaped.

Takes a String value with HTML entities, decodes them, and returns the result as a new String value.

var htmlStr = '&lt;tag>';
var textStr = htmlStr.unescapeHTML();
// textStr == '<tag>'

number.succ()

Params:
None.

Returns:
The successor value.

Returns the successor value for the Number object. In other words, it returns the value plus 1.

var num = 5;
var next = num.succ();
// next == 6

number.times(iterator)

Params:
iterator - A function.

Returns:
The Number object.

Executes a specified function a number of times equal to the Number value. The iterator function will receive the number of the current execution, starting at 0 and ending at the Number value minus 1.

// alert the values 0, 1, and 2.
new Number(3).times( function(val){alert(val)} );

// same thing
var x = 3;
function funcA(val) {alert(val)};
x.times(funcA);

number.toColorPart()

Params:
None.

Returns:
Hexadecimal representation of the number value.

Converts the Number value to its hexadecimal representation.

// hex == '1e240'
var hex = new Number(123456).toColorPart();

// same thing
var num = 123456;
var hex = num.toColorPart();



Sunday, November 20, 2005

Prototype.js : Form

This is the first post of many about the Prototype AJAX library. For me it fills the same void that commons-lang fills in Java: it includes functions to perform the common tasks that you seem to do over and over again. In this post I will cover the Form object and all of its methods. This post, and all posts about the Prototype library, will be broken down into two sections: a summary of the API followed by detailed descriptions and examples of each function.

API Summary

void Form.disable(form)
void Form.enable(form)
void Form.focusFirstElement(form)
HTMLElement[] Form.getElements(form)
HTMLInputElement[] Form.getInputs(form, typeName, name)
void Form.reset(form)
String Form.serialize(form)

API Details

Form.disable(form)

Params:
form - The ID attribute of the form tag, or a reference to the form DOM element.

Returns:
Nothing.

Given a form ID or reference to a form element, will disable all fields, which will make them appear greyed out.

Form.disable('theForm');
Form.disable(formRefVar);


Form.enable(form)

Params:
form - The ID attribute of the form tag, or a reference to the form DOM element.

Returns:
Nothing.

Given a form ID or reference to a form element, will enable all fields.

Form.enable('theForm');
Form.enable(formRefVar);


Form.focusFirstElement(form)

Params:
form - The ID attribute of the form tag, or a reference to the form DOM element.

Returns:
Nothing.

Places focus on the first field of the specified form. This is useful when, for example, you have a login page and you want to save the user an extra click by putting the focus on the username field so that the user can immediately start typing. This would typically be triggered by an initialization function attached to the document onload event.

Form.focusFirstElement('theForm');
Form.focusFirstElement(formRefVar);
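
For instance, wiring it up to the onload event might look like this:

window.onload = function () {
  Form.focusFirstElement('theForm');
};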


Form.getElements(form)

Params:
form - The ID attribute of the form tag, or a reference to the form DOM element.

Returns:
An array of HTMLElement DOM objects.

Given a form ID or reference to a form element, will return a list of form field DOM objects. Note that there is no guarantee that the fields returned will be in the same order that they appear on the page.

var fields = Form.getElements('theForm');
var fields2 = Form.getElements(formRefVar);


Form.getInputs(form, typeName, name)

Params:
form - The ID attribute of the form tag, or a reference to the form DOM element.
typeName - Filter for specific input type, null or an empty string for any.
name - Filter for a specific input name attribute, null or an empty string for any.

Returns:
An array of HTMLInput DOM objects.

Given a form ID or reference to a form element, will return a list of HTMLInputElement objects. Note that it only returns elements that use the input tag, and will not return textarea or select fields. You can specify the input type and/or name as a filter. If both the type and name are specified, both need to match for a field to be returned.

var allInputs = Form.getInputs('theForm');
var checkboxes = Form.getInputs('theForm', 'checkbox');
var firstname = Form.getInputs('theForm', null, 'firstname');


Form.reset(form)

Params:
form - The ID attribute of the form tag, or a reference to the form DOM element.

Returns:
Nothing.

Resets the form to its initial values. This is the same as when a user clicks on an HTML reset button.
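
Usage follows the same pattern as the other methods:

Form.reset('theForm');
Form.reset(formRefVar);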

Form.serialize(form)

Params:
form - The ID attribute of the form tag, or a reference to the form DOM element.

Returns:
A URL encoded querystring.

Loops through all of the form controls and returns a querystring value based on the current control values. This is useful when you need to pass the form values to some page other than the target of the form, as you might do in an AJAX query.

// send the form values to somepage.jsp
var params = Form.serialize('theForm');
document.location = 'somepage.jsp?' + params;
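
And a sketch of the AJAX case mentioned above, with a made-up handler URL:

// send the form values to the server in the background
new Ajax.Request('/cgi-bin/handler.cgi?' + Form.serialize('theForm'), {
  method: 'get'
});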





This concludes the first installment of Prototype documentation. Please feel free to comment on this entry and suggest further information that should be added.

Friday, November 18, 2005

Mashups for the Web

Mashup is historically a music term which, according to Wikipedia, "consists of the combination (usually by digital means) of the music from one song with the acapella from another". Web 2.0 evangelists have picked up this term for their own use, as referenced by another Wikipedia entry, to describe combining content from multiple sites in a single site. In this entry I want to talk about what this means, provide some examples, and hopefully inspire some ideas.

To get started, I think two sample use cases are in order.

Use Case #1: Joe Blog (J.B. for short) has a blog hosted on Blogger, a free service. He only has the ability to change the HTML code that is produced, and cannot add any server-side component. This is probably a good thing since J.B. doesn't know the first thing about how to do that anyway. J.B. does want a cool blog though, and wants a simple way to add his favorite content from other sites to his blog.

Use Case #2: Johnny Web (J.W. for short) is a web developer, has his own site, and wants to provide rich content for his small group of users. His site is very small, more of a personal web site, and only has a small number of volunteer contributors. J.W. wants to augment his content with content drawn from some of his favorite web sites. He doesn't want his site to look like everyone else's though, so expects to tailor the external content to fit his site's design and feel.

The Delivery

Content for mashups needs to be delivered by some mechanism. This could be the ever popular RSS, JavaScript, Flash, or maybe a remote API. J.B. from our use case needs a simple solution, so JavaScript or some other piece of HTML code that he can just drop into his template would be appropriate. J.W. on the other hand is a do-it-yourself kind of guy, and just wants the raw data, so he would most likely prefer an API. Since J.W. is using an API, he can also use some additional services that might not be available to J.B., who is only using a snippet.

Example: Google Search
(using external content on your site)

Google at some point quite a while ago made an API available to their search engine. It allows you to use web-services to send a query to Google, returning the search results. Using the API allows you to present the results on your site as you see fit. Google also supplies HTML code that you can place on your site, which includes a search box and the Google logo, but with this the results are displayed on Google's site and not yours. This is a good example of where an API may offer more than an HTML snippet.

Example: Flickr Advertisement
(providing links to your content on another site)

Flickr is an online photo library where you can upload and share pictures in your own little gallery. Flickr provides you with several options for advertising your Flickr gallery, including a Flash movie and a badge. The Flash movie is the interesting one here. It displays a dozen pictures in a 3x4 grid, and expands each in turn to make them easier to see. The best part about it is that the pictures in the Flash movie come from your gallery! As an example, I maintain a second blog, mostly for testing, that includes a Flickr Flash movie.

This is all good for J.B., but J.W. wants a little more. Flickr also has a web-services API that can be used to manipulate your content on their site. This API is not only useful for delivering content dynamically on your site, but is also used by some applications, including the new Flock browser.

Example: del.icio.us Link and Tag Rolls
(displaying your content on your site)

Recently several social tagging sites have emerged, with del.icio.us being one of them. I will cover social tagging at some point, but I will briefly describe it here. A social tagging site is basically an online bookmark site, where you add bookmarks, share those bookmarks with others (social), and add keywords (tagging) to them. The idea is that not only does it replace your browser bookmarks, but it also makes your bookmarks searchable by you and everyone else on the Internet. del.icio.us offers link rolls and tag rolls, both of which are added to your site by adding an external JavaScript code reference. The link roll presents your most recent bookmarked sites as a clickable list so that you can share them with your visitors. The tag roll shows the keywords you have tagged your bookmarked sites with, and the keywords that are used more often are shown larger to indicate visually that there are more bookmarks for those words.

Again, J.W. is yawning here, he wants a little more control. And again, del.icio.us provides an API to allow you to manipulate your bookmarks externally. This gives J.W. the freedom he needs, and also allows a way for external applications to access your data (e.g. Flock browser). This API is also used by blinklist.com, another social tagging site, to allow you to import your bookmark list from del.icio.us.

Summary:

Mashups are about allowing your users to use your data (or their data that you are storing) on their sites. Users like this because it makes their site look cool, and providers like this because it gives them more exposure. The big question of course: is this profitable? I think the answer is yes, although I have no evidence to support this. For those faint of heart, start small, see if your users like it, see if it draws more visitors to your site. I have some ideas for how you can track the success of your components, but that will need to wait for another entry.

Thursday, November 17, 2005

Web 2.0: Go Long!

I don't expect to cover any new ground in this entry, but instead compile and rehash what is already available in an attempt to better understand The Long Tail. The Long Tail is a prominent feature of Web 2.0.

Wikipedia states that the term "The Long Tail", as a proper noun, was first coined by Chris Anderson. The term was inspired by an essay titled, "Power Laws, Weblogs, and Inequality". In the essay Clay Shirky shows the distribution of inbound links for 433 blogs. The top two blogs had 5% of all inbound links, while the other 431 blogs shared the other 95%. The 95% is The Long Tail.

The Long Tail, as a business model, is about going wide with your product line. The idea being that the sales from the few best sellers will be eclipsed by the rest. Amazon is a good example of The Long Tail, where a vast majority of their sales come from products that don't sell well. Therefore, the Web 2.0 model is to offer a wider selection in your space, and not a focused one.

The trick of course is to make this model profitable. Adding products to your offerings can be cost prohibitive. For example, on longtail.com there is a post about The Long Tail as applied to the publishing industry. The problem for publishers is that printing a short run is more costly per book than a long run, but printing a long run of a book that doesn't sell fills expensive storage space. So popular books are cheaper to produce than unpopular ones, which is why books go out of print: it is just too costly to print for the small demand. So the challenge is to find a way to equalize the publishing cost, no matter the run size. If you can do that, then you will sell more unpopular books than popular ones.

Another example of The Long Tail is the iTunes service, where you can buy music for download. There is little additional expense between iTunes carrying an inventory of 100 songs, or 1 million songs. This makes it possible to sell not only the big hits, but a million other songs that might only sell a few copies a year. In the end, those million songs combined will sell more units than the top hits. This is what The Long Tail is all about.

One last thing I want to mention is The Long Tail Camp, an ongoing event for the next 10 years or so (the event has a long tail). This site encourages, and to a point facilitates, the gathering of groups for discussions on The Long Tail, and how it can be applied to particular industries.

Wednesday, November 16, 2005

Google Analytics: Report Overview

Updated 11/18/05: GA does allow for an hourly breakdown of traffic. Corrected the text.

After about 30 hours it looks as though data is finally available in my Google Analytics account, and I can take a proper look at the reports. The reporting site only mentions a 12 hour delay in the data being made available, but since this is a new offering, and a popular one at that, I guess we should give Google a bit of a break. After all, they are providing a free tool that isn't half bad. Anyway, on to the reports!

Note: you can click the images in this article to see larger versions in a new window.

When you log into the tool you are presented with a list of profiles that you have created. You can create as many profiles as you need; typically you will have one profile per site. The image to the right is the main entry page after selecting a profile, and gives you an overview of the traffic on your site. In the left navigation area is a list of reports that can be run on the data. You may use the drop-down menu of roles to change the list of available reports. The roles listed are "Executive" for summary style reports, and "Marketer" for reports on campaigns, goals, and conversions. The last role is "Webmaster", which presents reports on browser types, flash versions, Java support, connection speed, and other useful data.

One of the nice features is the map that you see here in the bottom half of the main page, where it shows you what geographic locations your traffic is coming from. For this profile this information isn't all that useful, but it can become important if you are selling a product and need to know where your users live. I was surprised though to see visitors coming from as far away as Norway, Alaska, and Brazil.


Page Views: You can track page views on a daily and hourly basis, as well as visits (hopefully less than the page views), user loyalty, goal and conversion tracking, etc. It is hard to tell, but it seems that it doesn't compare your recent numbers against historical values. That would be useful so that you can compare this week's numbers to last week's numbers, and even last year's numbers. If you require these features they can be found on competing systems, although it is unlikely that they will ever be free.


The referral report is one of the more important reports from my point of view. It shows you where your users are coming from, which could be Google, some other site, or maybe they are coming to your site directly. For me this report showed that about 60% of my traffic was coming from researchbuzz.org. Someone who contributes to that site wrote an article on Google Analytics, and referenced an article here for additional information. It is nice to see references like this, and it gives you a better understanding of how your audience finds you.


titlesThe "Content by Titles" report lists the top pages on your site, along with the number of visits, views, average times, and exit percentages for each page. In this report it looks like the Google Analyitics article that I wrote the other day recieved the most attention, which makes sense based on the referral report we just saw. For this page it is showing an average view time of about 3 minutes, with a high exit rate. Again, it makes sense since most of the visitors only came to read the information about Google Analytics.


This report is typical of a lot of the "technology" reports that Google Analytics supplies. This one in particular shows what version of Flash my visitors are using. This is especially important if you use Flash on your site. The last thing you want to do is put a Flash 8 movie on your site if only 20% of your users have installed that version. Users typically don't like it when they visit a web page and need to upgrade a plugin, change their monitor resolution, change their color depth, or install a different browser to view your content. All of this information is available here, and with this data you can make sure that your site is usable by your audience.


In summary, Google Analytics is nice for a site with small to moderate traffic. It has a large number of reports which are easy to navigate. It contains a lot of cool features for marketers, including goal tracking and conversion rates. All of this is delivered by Google for the bargain price of $0 as long as you report less than 5 million page views per month. For sites that receive more traffic than this, you may be better off with an alternate solution depending on your budget. Sometimes you really need to know how much traffic you have now, and not 12 hours from now. Sometimes you have specific marketing concerns that require additional report types. Perhaps these features will be available in the "pro" version of Google Analytics, that I can't say for sure. What I can say is that I like it, thanks Google.

Size Matters: Large HTML Select Lists Part 1

I am a believer in the 100K rule: no HTML page should be more than 100K in size, including images. This is a little on the big side for modem users, but loads quickly for DSL and cable users. Now imagine that you are working on an HTML form that has very long select lists, with options numbering in the thousands. Now imagine that this same list, with the same options, is repeated multiple times on that page to allow the user to select multiple items. Of course it seems like we should be using a single select list with multiple selections allowed, but with a list so long this presents usability issues, which is why the select list is shown multiple times. Now imagine that the page weight is 1.7MB. Gasp! I will provide some solutions to this problem.

Check the Space Count

We first want to determine the number of spaces in the file. With this many options it is easy to accidentally use a lot of spaces for indenting, especially if the select list is generated dynamically. This Perl one-liner will report the number of leading spaces, as well as the number of lines in the file.

$ perl -e 'while(<>){$s += length($1) if /^(\s+)/; $l++}
print "spaces: $s\nlines: $l\n"' TheFile.html

spaces: 9535
lines: 2322

Well, it looks like we have only 10K worth of spaces, which isn't much, but seeing that our goal is a 100K file, we will need to see if we can reduce that somewhat.

Look For Redundant Code

Next we want to look for anything that is redundant, which is a manual process. Spend five or ten minutes looking through the HTML code for anything that is used more than once. This might include <font> tags, style attributes, and extra JavaScript code. If you have too many font tags, perhaps using CSS in their place will save you some space. If you are using a lot of style attributes, maybe you can save some space by using a CSS class. In our case, there is a JavaScript function that is repeated multiple times in the document, so we will fix the generating code so that it only outputs a single copy of the routine. This saves us about 5K; again not much, but it is simple to reclaim.

Look for Redundant HTML

This is where we can save a lot of space. We have a select list with thousands of options, and most of each tag is repeated over and over. The sample below shows how little of each option line is actually unique.

<option value="red">red</option>
<option value="blue">blue</option>
<option value="green">green</option>

We can replace this redundant code with some smart use of JavaScript.

<script src="js/prototype.js" type="text/javascript"></script>
<script type="text/javascript">

$('selectDiv').appendChild(newSelect('s1'));

var list = new Array('red', 'green', 'blue');

for (var i = 0; i <>
$('s1').appendChild(newOption(list[i], list[i]));
}

function newSelect (id)
{
var result = document.createElement('select');
result.setAttribute('id', id);
return result;
}

function newOption (val, name)
{
var result = document.createElement('option');
result.setAttribute('value', val);
result.appendChild(document.createTextNode(name));
return result;
}

</script>

Note that I am using the Prototype library, which makes things a little easier to do, and cleaner to code. The function $(id) returns the DOM node with the specified id, and $F(id) does the same for form elements, making it easy to get and set values.

So this code, instead of embedding the tags in the HTML, builds the DOM tree dynamically when the page is loaded. For a very long list this will save a lot of space. We are saving 24 characters for each option this way, for a savings of 24K per thousand options.

Also note that in our example list the option values are the same as the option text. So not only are we saving space by removing redundant HTML code, but also by removing the redundant option values. In our example this saves us an additional 4 characters per option, but in practice this will most likely be a lot more.

In the next article I will get a little more creative with the JavaScript code to save a lot of additional space.

Tuesday, November 15, 2005

Google Analytics - Still Waiting...

It's been nearly 24 hours, and still my GA account is not showing any traffic. The system tells me that my tags are installed properly, and that I will receive data in 12 hours... which is what it told me 20 hours ago as well.

Unfortunately it seems that others are having problems as well. I can only hope that this service will be fixed shortly, and that it will deliver what it promises. We expect a lot from Google, mostly due to past performance, which I think is why I expected a flawless launch. Perhaps I expect too much from Google (...although their blog service is flawless...). I guess we will see in the days to come.

Other related links:

Google Analytics - Having a bad first day?
Google Analytics Off To Rough Start

Digg.com - The Virtual Water Cooler

"Hey Luke, where did you get those new tires for the General Lee?"

"I got them down at Cooter's. Aren't they sweet."

We all pick up things by word of mouth. We might learn where we can get the best price on a product, where to find some funny podcasts, or find out who might know the answer to some question that we have. About a year ago a site, digg.com, was launched which provides such a service on-line.

Digg.com works just like it does around the water cooler. If you know something that you think is interesting, you share it with others while you are waiting in line for some water (or at the microwave, refrigerator, etc). If the people you told find it interesting, then they tell some people, and so on, and so on. Digg.com allows users to submit stories, and other users either digg them or ignore them. If users digg a story, its digg count rises, making it more visible on the site, and if it is dugg enough it will end up on the home page, causing a digg effect.

The digg effect occurs when a story gains enough diggs to move it up the ladder to the digg.com home page. Being up front and center, the story will gain even more notice, and will be visited by digg visitors. Some have said that you can expect a jump of 5,000 to 10,000 page views per day, and for some dugg sites this could be 10 times more than their usual traffic. Some webmasters see the digg effect as a good thing, allowing their site to be seen by many, in the hope that some visitors will become regulars. Others see the digg effect as harmful, causing bandwidth cost overruns, and complain that digg users don't contribute to their site, either through comments or by clicking on ads (sort of a virtual "wham bam...").

There are mixed feelings about the term digg effect, and some dismiss it outright. I think that for large sites, a mere 10,000 extra page views per day is fairly trivial, but for small sites this is a lot of traffic, and a real phenomenon.

I for one am a hooked digg user. One of the features that I enjoy is digg.com/spy, which uses AJAX techniques to update the page every 10 seconds or so. This page displays recently dugg stories, and acts like a virtual "what is cool" gauge. I often find it entertaining to just stare at that page, and watch stories being dugg, like a meme stock ticker.
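
Incidentally, the Prototype library covered elsewhere on this blog has everything needed to mimic that trick. Here is a minimal sketch, assuming a made-up server URL /recent-stories.html that returns an HTML fragment of the latest stories:

// every 10 seconds, replace the contents of <div id="stories">
// with fresh markup fetched from the server
new PeriodicalExecuter(function () {
  new Ajax.Updater('stories', '/recent-stories.html', {method: 'get'});
}, 10);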

Digg is, I think, a glimpse of what is to come with Web 2.0. It is a site powered by the people, for the people. It is well received, I think, because in this day and age, when people are skeptical of the media and of our politicians, this is one place where they feel that their vote really counts. Digg also has a very low bar for entry; you don't need to be a long time member, or have some special knowledge, to become part of the community. You can jump right in, digg stories, comment on them, and post your own. This has no doubt led to their success.

Monday, November 14, 2005

Google Analytics

Google just released their Google Analytics product to the public. It is an enterprise-quality, fully featured analytics application, and... it's free. Well, there is a small catch: it is only free for up to 5 million page views per month. Of course, if you are getting 5 million views a month, you can probably afford to pay for the service. This is perfect for those smaller sites where an application like this is not normally within budget. It's no surprise that their signup site was running very slowly today, most likely being hammered by new accounts.

For those not familiar with web analytics applications, this summary should give you a good idea of what you get for $0.

Traffic Filters

Filter your traffic based on domain, IP range, and advanced regular expression matching. This is useful for filtering out your own traffic so that your reports are accurate.

Goal Tracking

Track a path that you want a user to take, and Google Analytics will report on the conversion rate. Useful for tracking a successful cart checkout, and other predictable paths that you want the user to follow.

Track External Links

Track clicks to sites you link to. This is important if you are getting paid by a target site on a per click basis. Even if you aren't getting paid per click, it is good to see how your users are leaving your site.

Campaign Tracking

Add campaign codes in links that you send to users via email or links that you place on sites that you don't own. Google Analytics will track these campaigns so that you can get an idea as to how well they are performing.

Multiple Site Profiles

Set up multiple domains, and then use different filters and campaigns for each.

Track Flash and JavaScript Events

This is useful if you want to track events that don't reload the page. For example you may want to not only track the page view, but also track that a user started playing a flash movie, and then track the number of users that watched the entire movie.
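
As I understand it, this is done with the same urchinTracker() JavaScript function that records a normal page view; you call it yourself with a "virtual" page name. A sketch (the page name here is invented):

// record that a visitor started the intro movie
urchinTracker('/events/movie-started');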

Track File Downloads

Track downloads like you would track a page view.

In short, it is a great service at a price that is unbeatable. I am wondering how the other players in this market are taking the news. In any case, I have added Google Analytics tags to three sites today, but unfortunately there is a 12 hour reporting delay, so I have not had a chance to take a look at the reporting tools yet. Tomorrow I'll see what the reports look like, but frankly I expect that they will be just as impressive as everything Google does.
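
For reference, the tag itself is a small block of HTML that goes on every page you want tracked. It looks something like this, with the placeholder replaced by your own account number:

<script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>
<script type="text/javascript">
_uacct = "UA-XXXXXX-X"; // your account number
urchinTracker();
</script>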

See Also - Google Analytics: Report Overview

read more | digg story

JavaScript is the Devil!

It's Monday morning, I got 5 hours of sleep, and I'm cranky. So I want to talk today about my impossible coding session yesterday with JavaScript. I wanted to play around with some AJAX techniques, and see what I could make of it. I wanted to write something cool, something that I could actually use. The first thing that I wanted to build was a simple client-side RSS reader. It was a crazy idea on a lazy Sunday.

I kicked open my favorite IDE, Eclipse, and kicked off my project routine. I created a manageable directory structure with JavaScript and CSS files in their own directories. I checked the initial structure into Subversion. Then I started coding. A div here, an onload there, and I was well on my way to having some fun.

I write some JavaScript code for my work, but my primary language for the last few years has been Java, with years of prior Perl experience before that. Neither of these really prepared me for the nine hells that would be known to me as JavaScript. I have always been an Emacs fan, and I saw IDEs as bulky and excessive. Recently though I wanted to really learn to use an IDE and see what sort of benefits it would provide. I found that, in general, I like using an IDE, or more specifically Eclipse. The refactoring tools alone increase productivity, and help get the job done under budget and on time. My IDE is now a part of my toolbox.

Back to the Devil... As I was saying, I created the initial project framework, and started coding. I don't do much JavaScript coding, like I mentioned, and my JavaScript book was at work, so I relied on a few online resources to guide me through the DOM. After a few minutes I was ready to try out what I had so far. It didn't work. Hmmm... the Firefox JavaScript console showed an error. I scanned the API, made some changes, and tried again. It didn't work. Hmmm... the Firefox JavaScript console didn't show an error. It seems that I had misnamed a property of an object, so I fixed the mistake, and tried it again. It didn't work. I looked at my code, rechecked the online reference, and scratched my head. Just at that moment my other computer, the one that I was using to play a podcast, started to skip, like a scratched record. I rebooted it (Windows is the Devil too).

This went on for two hours.

During this session I called my computer every name that I could think of. I even yelled, "I'm going to replace you with a Power Mac!". My computer took it... well... like a computer. It just sat there, devoid of emotion. I felt like throwing it out the window, but I wouldn't do that. I knew that when I cooled down I would still need it to do my work. Luckily my wife came home around that time, and I just walked away, the same way you walk away from a pointless fight that you know you can't win.

Writing all of this down is, I think, good therapy for me. I'm not going to let a loosely typed language run me out of town! I'll get back up on that horse soon, and then I'll teach JS a few tricks!

Sunday, November 13, 2005

The New Media

Tired of reality TV? Sick of re-runs? Do you want to watch or listen to what you want, when you want? I do.

I could TiVo every show that I want to watch, of which there are few at this point. I could spend lots of money buying DVDs (I guess I do that already). Or I could download non-network (I am avoiding the word amateur) content from the Internet, and watch or listen to it at my leisure via Podcasts.

I have chosen Podcasts. I am not talking about the pay-to-view podcasts, I am talking about the donate-if-you-want variety.

I don't expect to convert anyone; after all, how do you convert a fan of "Charmed" or "Trading Spouses"? :) What I do want to do is jot down a list of some of my favorites, most of them tech related (in an odd way), and some of them not intended for children.

Tiki Bar TV
Video only, comedy. Hilarious! See you at the Tiki Bar.

This Week in Tech (TWiT)
Top rated tech news Podcast, in both video and audio.

Diggnation
Weekly discussion of the top stories on digg.com.

Systm
Video series on building tech stuff. Like creating Podcasts.

Infected
This is a new tech audio show. It's ummm... different.

Looking for recommendations: I am especially interested in comedy video, like Tiki Bar TV. If you know of any, please leave a comment.

Friday, November 11, 2005

Going Too Far With Copy Protection

It was recently discovered that Sony music CDs which install bonus content on your computer also install a rootkit. The purpose of the rootkit is to make it nearly impossible to remove the copy protection from the PC once installed. It has also been discovered that malicious trojans are taking advantage of Sony's "copy protection", opening up your computer to hackers.

To many this is just another example of the music industry taking copy protection too far, and just another reason not to buy their product. But why would Sony do something like this? In a recent TWiT show, an ex-Sony employee told a story from when he was working at Sony, just as music CDs were coming out. He said that the CDs, which cost only 44 cents to produce, should have had a price tag of about $4. The reason he gave for pricing CDs higher than vinyl was simply that CDs still sold at that price.

Perhaps there is nothing wrong with overcharging if you can get away with it. After all, we are capitalists, aren't we? This isn't much different than when the oil companies nearly doubled the price of gasoline in the wake of Katrina. But then again, the oil companies are appearing before Congress to explain the price hikes. It seems sometimes that capitalism and ethics just don't mix. Anyway, enough politics, as this wasn't the point I was trying to make.

The point of this entry, in case you missed it, is that Sony has committed a criminal act. They have used hacker tools to conceal their software on computers. They have endangered the data on their customers' computers. They have taken copy protection too far.

Wednesday, November 09, 2005

What is Your Service?

The Web 2.0 Meme Map lists as one of the core competencies, "Services, not packaged software". So as part of reinventing your own website, the first question should be, "What is my service?" I haven't taken a formal poll, but I don't think everyone will see their site as a service.

From a marketing perspective, I feel that many companies see their site as a tool to advertise their product, as opposed to a way of providing a service. For example, have you ever had a problem with your cable (electronics, car, phone, medicine, governor, ...) and then gone to the company's website, only to find that there is absolutely no information on how to repair the product, or even a number to call? I find this frustrating. I don't go to a website because I want to see a commercial! It's time for marketers to start thinking about their site as a service, and to look at it from the viewpoint of their users, not their stockholders. In the end, I feel that your site and your product will do better if you provide real value.

A good example of this is Google. Sergey Brin, co-founder of Google, was interviewed at the recent Web 2.0 conference. Many things were said, but the one thing I took away from the talk was not information about any single Google product, but something larger. Sergey expressed a strong interest in enabling the user, in providing a good service. He explained that when you search for a stock quote, the first link will likely be to Yahoo!, but that is not because of a paid relationship; it is because Yahoo! has high-quality ticker information. When talking about their ad system, he said that they decided not to jump into banner ads, which would have made them a lot of money, but instead to see if they could build something better to help enable businesses. He expressed that his service was to help the user find things, to help the user leave his site.

So, what is your service? Are your users satisfied with the service?

Perhaps the answer to that lies not in the corporate meeting room, but with the user. You need to ask the user. You need to make it easy for the user to give you feedback on your content. Some sites allow users to comment, such as the blog comments that appear after each article on onjava.com. Other sites allow you to rank the content by clicking a link, possibly presented as stars.
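
A ranking widget like that takes only a few lines of AJAX. A minimal sketch, assuming a hypothetical /rate URL on the server and a hard-coded article id:

<a href="#" onclick="rate(5); return false;">Rate this 5 stars</a>

<script type="text/javascript">
function rate(stars) {
    // Create the request object (the ActiveX fallback covers IE6).
    var req = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject('Microsoft.XMLHTTP');
    req.open('GET', '/rate?article=123&stars=' + stars, true);
    req.send(null); // fire and forget
}
</script>

The user never leaves the page, which is exactly the kind of low-friction feedback people will actually give.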

Achieving superior service therefore consists of two parts: first, determine what your service is; second, ask your users whether that service is useful.

--------------

Update 11/9/05 10:16 AM: This user-first attitude is complemented by Stowe Boyd's compact definition of Web 2.0. As part of his definition he states, "Users First -- The user experience is a proxy for the user, and all of the folks I touched base with so far agree that user experience is the pivot point of everything. That means that the norms of human expectations, social interaction, and interface goals become the central motif of these apps. For example, sharing with others becomes a basic principle, not something tacked on later."

Monday, November 07, 2005

Diving Head First in to AJAX

I have dedicated the past few days to AJAX research. With a little help from Pragmatic AJAX, which did a nice job of demystifying Google Maps, I was able to create my own map application with no server-side code. It is still a work in progress, but it already has some nice features.

I also picked up Foundations of AJAX, which complements the first book nicely. This book covers tools for JavaScript that aren't all that familiar to server-side programmers: tools like JSUnit, JSDoc, and tools for debugging. This has led me to start rewriting the map code as an OO JavaScript library, complete with JSDoc tags.
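
For the curious, the style I'm aiming for looks roughly like this (a sketch; this Marker class and its fields are made-up examples, not my actual map code):

/**
 * A point of interest on the map.
 * @param lat latitude in degrees
 * @param lng longitude in degrees
 * @constructor
 */
function Marker(lat, lng) {
    this.lat = lat;
    this.lng = lng;
}

/**
 * Returns a readable description of this marker.
 * @return a string like "(40.7, -74.0)"
 */
Marker.prototype.toString = function() {
    return '(' + this.lat + ', ' + this.lng + ')';
};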

So, in short, AJAX isn't something that will be here a year from now; it is here now, and very usable. Apparently I missed it up to this point because I was more involved with Drools and Sprog. Time to wake up and smell the coffee.

Friday, November 04, 2005

Flash and Web 2.0

About a month ago Kevin Lynch from Macromedia gave a presentation at the Web 2.0 conference. In the talk he discussed some of the new features in Flash 8.5 and Flex 2.0. One of the themes of the talk was that Flash was built for creating animations, not applications. From experience I can tell you that writing a Flash application can be a painful experience. Macromedia is planning on changing all of that.

For me there were a few important highlights.

First, Flex 2.0 will leverage the Eclipse editor platform for editing code, a big step in the right direction. It looks like they did a pretty good job with the editor as well. You can drag and drop components onto the WYSIWYG canvas, change their properties, and then view the generated XML code. The XML code can then be edited by hand, and code hints guide you as you add new tags and attributes. Within 5 minutes Kevin was able to build a simple application that searched his collection of photographs on Flickr and displayed them. This met with a round of applause.

The second important point Kevin made was that Macromedia isn't trying to replace AJAX, instead they want to work with it. Kevin presented a sample application where AJAX and Flash communicated via a Flash-JavaScript bridge. This solves the "should I use Flash or JavaScript" implementation problem... you can use both!
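
To make that concrete, here is roughly what the JavaScript side of such a bridge looks like (a sketch; it assumes the movie exposes a setData method through Flash 8's ExternalInterface, and the element id is made up):

// Grab a reference to the embedded Flash movie.
function getFlashMovie(id) {
    // IE exposes movies on window; other browsers find them in the DOM.
    return window[id] || document.getElementById(id);
}

// Push data fetched by the AJAX layer into the Flash UI.
function updateFlash(data) {
    getFlashMovie('myMovie').setData(data);
}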

The last important point is the adoption rate of the Flash player. Historically it takes one year for a new Flash version to hit 80% penetration, and then 90% shortly after that. This means in about a year or so, Flash+AJAX applications should be a seamless experience for the user.

Thursday, November 03, 2005

In the Center Ring: O'Reilly vs. Schwartz

During the Web 2.0 conference this year there was a talk titled "Open Source and Web 2.0," where the panel included Tim O'Reilly, CEO of O'Reilly Media, Jonathan Schwartz, president and COO of Sun Microsystems, and Mitchell Baker, president of the Mozilla Foundation.

During the discussion there was this bout between Schwartz and O'Reilly.

Tim O'Reilly: "There was a time in the 90's when you could legitimately make the claim that Sun was the power behind the Internet, and I think you can't make that claim anymore. Linux has really taken over."

Jonathan Schwartz: "You mean RedHat."

Tim O'Reilly: "No, I don't mean RedHat. I don't think Google runs RedHat."

Jonathan Schwartz: "I'm sorry, Google built their own computers."

Tim O'Reilly: "That's right."

Jonathan Schwartz: "I don't see a lot of customers doing that."

I'm not too sure what point Jonathan Schwartz was trying to make. Is it that people that build their own computers use RedHat? Or is it that Linux and RedHat are synonymous? Or is he just trying to point out that he knows more than Tim O'Reilly?

I'm not sure what happened there, but it wasn't pretty. A serious discussion about Open Source and Web 2.0 is hardly the place for a pissing match.

Wednesday, November 02, 2005

Incubating An Idea: Springboard

An idea for a new tool manifested itself to me last night, and I wanted to make a record of it before my mind moves on to the next thing.

Springboard:

A system of pluggable components that can act as inputs, triggers, filters, activators, and outputs. The idea is very similar to Sprog, but with the addition of triggers and activators. Another important difference is that a Springboard may have multiple inputs and multiple outputs.

Motive:

Promote code reuse.

Features:
  • Java based tool.
  • Configuration using Spring.
  • Different interfaces to fill different needs; a component may implement one or more.
  • Activator components execute an action, like sending an email, FTPing a file, or launching an application.
  • Triggers lie in wait until an event occurs, like a timer firing or an email being received.
  • Input components talk to input devices, like the keyboard, a file, or IMAP.
  • Output components talk to output devices, like FTP, SMTP, or a file.
  • Filter components perform a transformation, like RSS to text.

Sample Scenario:

Trigger (timer - 1 hour)
-> Input (HTTP - fetch http://.../foo.rss)
-> Trigger (Input Changed - trigger if local copy differs from input)
-> Filter (RSS Diff - constructs a new RSS that is the difference between two RSS feeds)
-> Activator (RSS Loop - executes the next step once per RSS article)
-> Filter (RSS to Text - convert using "${title}\n\n${summary}")
-> Output (IM - send text to IM recipient list)

With this configuration it will attempt to load an RSS feed every hour, and when the feed changes, send an IM to a list of recipients. What I have shown above is actually slightly simplified, as some of the components would require additional components for input and output. For example, the Input Changed Trigger needs an Input/Output Component for local storage of the RSS feed.

Final Thoughts:

This is definitely more complex than Sprog, but the idea is to be able to solve complex problems. Perhaps instead of using Spring for configuration, language constructs would work better.


Tuesday, November 01, 2005

Getting Ready for Web 2.0

Web 2.0 is all the buzz, and now Microsoft announces "Live Software". In this new age the Internet will become a platform, and not just a transport. Users will not only use, but will also contribute. Content will be contributed through blogs, and photographs via services like Flickr.

But what else?

Picture this...

I get home from a long day of work, and my wife just got home herself, which means there isn't any food cooked. Well, it's getting late, so maybe we should just order some pizza from Jack's. I pick up the phone, hit the "Find" button, and say "Jack's". The phone, which is hooked up to my computer (of course), does a Google Maps search in my area for "Jack's", and verbally responds, "Jack's Pizza, 125...". I hit the "Site" button on my phone.

My television is also hooked up to my computer, and by clicking "Site" on my phone, the website for Jack's now appears on my television. Using my TV remote (with a mini-joystick, of course) I click the menu button. "Ahhh... a pepperoni and anchovy calzone, perfect!" I then hit the "Call" button on my phone.

Great, the food has been ordered. Unfortunately a friend of mine, Scott, just showed up at my door. He is hungry too, so I redial Jack's and add to my order.

Scott volunteers to pick up the food, but he isn't familiar with my town. I pick up the phone again, and hit the "Recall" button (not redial). I then hit the "Map" button. The map appears on my television, and of course has pre-plotted the route from my house. Scott is off to get the pizza.

The more I think about this, the more plausible it seems. The technology is already there for most of this, and it seems the only thing that needs to be "built" is the interfaces between the devices.

But I am getting off the topic at hand. As a programmer, how do I prepare for this? As a consultant, how do I prepare my clients for this? It just seems that information overload is already an occupational reality, and I wonder how much more we can handle without simplifying something. Perhaps domain languages, smart tools, or adapters are the answer. So far, though, I am seeing a lot of big ideas, but not much about making all this easier to do.

Perhaps in this new age the "web programmer" will go away, and be replaced instead by interface specialists. These specialists will only deal with specific domains, and interface their tools with the tools of other specialists.

Too many questions right now, and my crystal ball 1.0 is on the fritz.