The first two talks were great (Jay Phillips on Adhearsion, and Dan Grigsby on, basically, hacking the market). While on different topics altogether, both were plugging a common theme: keep coming up with new ideas, set out on your own, and if you look for the niches and cracks you can fill in the market, you can exploit/hack those inefficiencies to your advantage and really make it as an indie with a few friends, not working for the man, so to speak.
Which we already knew (or at least hoped) was the case, but it’s awesome having these folks share their experiences, ideas, and methods of doing it themselves. It’s basically a lot of people taking the kind of “hack everything” mentality that’s the focus of the HOPE/2600 crowd and applying it to the question, “how can I set out on my own as a hacker and make a living?”
Also: the memcached talk was a totally excellent geek-out on caching.
The next talk is called “Living on the Edge,” and it’s about what’s going on in Merb edge development, though you can see from the title alone how it fits the broader theme.
There’s a great community here in Toronto; I’m looking forward to meeting some New Yorkers and seeing if we have this kind of community back in NYC and Jerz.
More to come.
I’m going to break up the Rubyfringe blogging for a minute to stump on a topic that’s close to our hearts here at Zetetic.
When it comes to building a new web application, our design decisions related to user accounts should be driven by a common paradigm: a user object should represent a real-life human being. That user object can then be associated with any number of other entities. Fun! Flexible!
You can see this at play in our time-tracking app Tempo: every user has exactly one account and can be associated with any number of other people’s projects. It’s a simple many-to-many relationship (a social-network relationship, really), and it seems like an obvious and straightforward design decision, but that’s not how a lot of web applications work.
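In data-model terms, that’s just a join table between users and projects. Here’s a minimal plain-Ruby sketch of the idea; the class and attribute names (User, Project, Membership) are illustrative, not Tempo’s actual schema:

```ruby
# One account per human; projects are reached through join records.
class User
  attr_reader :name, :memberships
  def initialize(name)
    @name = name
    @memberships = []
  end

  def projects
    @memberships.map(&:project)
  end
end

class Project
  attr_reader :name, :memberships
  def initialize(name)
    @name = name
    @memberships = []
  end

  def users
    @memberships.map(&:user)
  end
end

# The join record: one row per (user, project) association.
class Membership
  attr_reader :user, :project
  def initialize(user, project)
    @user = user
    @project = project
    user.memberships << self
    project.memberships << self
  end
end

alice = User.new("Alice")
acme  = Project.new("Acme Redesign")
biz   = Project.new("BizCorp Audit")
Membership.new(alice, acme)
Membership.new(alice, biz)

# One login, many projects across many companies.
puts alice.projects.map(&:name).inspect
# => ["Acme Redesign", "BizCorp Audit"]
```

Because the association lives in its own join record, adding Alice to a tenth client’s project is just another Membership row; nothing about her single account has to change.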
Many really good systems out there have made the decision to build their web applications around companies or organizations rather than people. From there, people are users within the company (as in “assets” or “resources”) and are walled off from people in other companies using the same web application. In some cases they even log into different sites! This can require a real flesh-and-blood human being to maintain multiple accounts, one for each company they work with that uses the system.
This paradigm can provide some conveniences (though none of them unique to it, other than the company-as-corral feature itself), but it causes real-world problems when flexibility is severely limited by the “company” wall. If Tempo followed this paradigm, our users, many of whom work with many different companies, clients, and subcontractors, would be at a real disadvantage, having to maintain numerous accounts and log into different sites just to summarize their full monthly billing!
Many folks who are independent subcontractors or consultants love to use Tempo because it allows them to work with many customers and partners without having separate logins. They can log in and report on all their time from one place.
Social networking applications, however cliché it is to laud their innovation nowadays, are really paving the way when it comes to recognizing that organizations and companies are just groups of people. They are defined by relationships between people, not a fixed hierarchical structure where the organization is the center of the universe and people are bolted on ad hoc (or grown like leaves on the company tree)!
Consider how useful it would be to add groupings or private channels to an asynchronous messaging system like Twitter. Everyone on Twitter has their own account. Allowing people to join groups would be easily implemented (scalability issues aside). Now imagine Twitter being based on organizations primarily, and having to have one account for each organization. Ugly, right? And unnecessary.
Stephen and I are in Toronto for the RubyFringe conference! The opening party at Amsterdam brewery was a really good time, but things really started with Fail Camp, hosted by Joey Devilla over at the Rhino. It was awesome. The level of friendliness and communication and idea brewing is very high. It’s like the humid, thick air is caffeinated.
More coherent thoughts and notes to come as the day goes on and the talks on the track progress! Stay tuned.
Update: You can find me on Facebook here, and Stephen here, any of you looking for us.
Web application frameworks like Ruby on Rails and Oracle’s Application Express (APEX) are generally built as full suites, providing the most common functions of web frameworks in an extensible way so that you don’t have to roll your own. They are incredible products, both great at what they do. But in the real world, many businesses have widely diverse data sources in their environments, and sometimes you’ll have to mix and match.
Case in point: in one of our clients’ environments there’s an instance of APEX providing data reporting and there’s a separate web app for manipulating directory services; both are protected by a single-sign-on solution. If you log into one, you have the cookie that allows you into the other without having to log in again. A recent change requested of the web-app was that a particular text field be given type-ahead suggest / auto-complete functionality for a particular piece of data stored in the Oracle DB schema atop which the APEX app sits.
At first I looked up a number of the posted approaches to doing auto-complete in APEX alone, just to get the ball rolling and see what functionality was there; there’s a lot of good information in the Application Express section of the OTN forums. I figured at first that I ought to be able to code up the server-side processing in APEX using an On Demand process (basically a bit of PL/SQL) to respond to the very simple, one-parameter request of a type-ahead. On the client side I’d just use Ajax.Autocompleter from the script.aculo.us library to keep things simple for the web-app, too.
This ended up tanking due to how APEX handles requests through mod_plsql. To run an On-Demand application process, the calling request has to contain a valid session ID in the query string. Not to mention I’d have to hack up Autocompleter a bit to supply the param name and value in APEX’s tortured f?p query string syntax (f?p=app_id:page_id:session_id::::param,param:value,value, and so on).
But I didn’t shy away from the task: I set about figuring out how I could first grab a session ID, import it into the client-side JavaScript, and use it to dynamically construct the URL and params. It was ugly, and I never did get that session ID cleanly.
But as you can see in that thread, Louis-Guillaume suggested a much simpler solution: since we’re using mod_plsql anyway (APEX runs inside it), why not just go old school and create a stored procedure to respond to the Autocompleter call?
It’s basically the same code as the On-Demand process. You create your stored procedure, set up a DAD for mod_plsql (done in marvel.conf or dads.conf in Oracle HTTP Server, which is Apache under the hood), and call it like so:
http://example.com/<my_DAD_name>/StoredProcName?paramName=paramValue
The parameters should literally match what’s defined for the parameters on the PL/SQL procedure. Leaving some off is fine (as long as you set defaults and account for the possibility), but adding extra undeclared parameters causes a problem, as we’ll see in a moment.
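For reference, the DAD itself is just a location block in dads.conf (or marvel.conf). The sketch below is hypothetical: the path, schema credentials, and connect string are placeholders, and the exact set of Plsql* directives varies by Oracle HTTP Server version, so check your own install’s docs:

```apache
# Hypothetical DAD entry -- credentials and connect string are placeholders
<Location /pls/my_dad_name>
  SetHandler pls_handler
  Order deny,allow
  Allow from all
  PlsqlDatabaseUsername      myschema
  PlsqlDatabasePassword      mypassword
  PlsqlDatabaseConnectString localhost:1521:orcl
</Location>
```

After a restart of Oracle HTTP Server, procedures in that schema become reachable under http://example.com/pls/my_dad_name/.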
So here’s the stored procedure:
CREATE OR REPLACE PROCEDURE autocomplete_for_empname( p_search_text IN employees.name%TYPE DEFAULT NULL )
AS
  v_count NUMBER := 0;
BEGIN
  -- Emit an HTML fragment and keep intermediaries from caching it
  OWA_UTIL.mime_header ('text/html', FALSE);
  HTP.p ('Cache-Control: no-cache');
  HTP.p ('Pragma: no-cache');
  OWA_UTIL.http_header_close;
  HTP.prn ('<ul>');
  IF p_search_text IS NOT NULL THEN
    -- Sort in the inner query, then cap at 10 rows; filtering on ROWNUM
    -- in the same query block would grab 10 arbitrary matches before sorting
    FOR v_employee_row IN
      (SELECT name FROM
        (SELECT e.name FROM employees e
          WHERE LOWER(e.name) LIKE '%' || LOWER( p_search_text ) || '%'
          ORDER BY e.name)
        WHERE ROWNUM < 11)
    LOOP
      v_count := v_count + 1;
      HTP.prn ('<li>' || v_employee_row.name || '</li>');
    END LOOP;
  END IF;
  IF v_count = 0 THEN
    HTP.prn ('<li>No results found</li>');
  END IF;
  HTP.prn ('</ul>');
EXCEPTION WHEN OTHERS THEN
  HTP.p ('Error: ' || SQLERRM);
END autocomplete_for_empname;
What this code does is provide the unordered list expected by Ajax.Autocompleter. We can call the stored proc like this to test the results:
http://example.com/<my_DAD_name>/autocomplete_for_empname?p_search_text=jon
And we’ll get results like:
<ul>
<li>Jonathan Myers</li>
<li>Jones Johnson</li>
<li>Jon From Garfield</li>
... &c. ...
</ul>
The constructor for the Autocompleter was an easy setup:
<input type="text" id="autocomplete" name="p_search_text" size="50" />
<div id="autocomplete_choices" class="autocomplete"></div>
<script type="text/javascript">
new Ajax.Autocompleter("autocomplete", "autocomplete_choices",
  "http://example.com/our_dad_name/autocomplete_for_empname", {
    paramName: "p_search_text",
    minChars: 3,
    method: 'get'
});
</script>
There was one final hang-up. Even though I was getting the correct responses by hitting autocomplete_for_empname directly when testing from my browser, Ajax.Autocompleter was getting a 404 response. I took a look at the request being sent in Firebug, and it turns out that with every Ajax request, Prototype, the supporting library, adds an extra ‘_’ parameter with no value.
I had to open up the prototype.js library and comment out this line in the Ajax.request method:
request: function(url) {
var parameters = this.options.parameters || '';
// if (parameters.length > 0) parameters += '&_=';
Then, all was right with the world. It turns out the extra param is included to work around an old bug in Safari 2.x.
So there you have it. If you want to do an external autocomplete against data in your APEX schema, use Ajax.Autocompleter and a stored procedure over mod_plsql.