Spring and BlazeDS Best Practices

I recently wrote a white paper for a client to assist them in ramping up with Spring and BlazeDS Integration (SBI) best practices. The link below will allow you to download the white paper.

It contains many links to helpful downloads and walks through a "painless" install and configuration for this solution set. There is also a Java and Flex example app to get you started. Enjoy...

BlazeDS Best Practices

Flash Camp Atlanta

If you are looking for a great day of Flex and Adobe discussion/training sign up for Flash Camp Atlanta!

It's on Aug. 28th and starts at 8 with the special introduction course, with general sessions starting at 9. You can't beat the training you will receive from these experts in Flash and RIA; plus it only costs 50 bucks!

I am planning to attend and hope to see you there; click on the image to register:

jvm.config Tuning Tip For All Server Side Java Solutions

I have been wanting to blog about an experience I had not too long ago on a project where the JVM was consistently throwing out-of-memory (OOM) errors. It had me banging my head against my desk for a few days attempting to trace the culprit.

Was it code related? Yes. Was it bad code? Sort of. Was it very, very intense code (looping over many SQL calls and instantiating many objects)? YES! Due to time constraints, the challenge was to apply a quick band-aid rather than redesign the rules engine that had these characteristics. It is important to note I was not responsible for the poorly written code 8-).

I first had to identify whether this code was causing memory leaks. To do this I used YourKit Java Profiler; they have .NET and Java profilers that allow you to monitor their respective runtimes, and the Java profiler is very simple to configure within the jvm.config file (I'm not going to get into that in this post). My point here is that I witnessed memory steadily climb and at times spike, but I was only able to reclaim memory by executing a manual GC. UGGG!!! So, no memory leak, but the runtime was hanging on to what it had... Why was the GC not reclaiming memory quickly on its own? I had all the best-practice JVM args of old (ParNewGC, the RMI GC interval settings, etc.) to no avail.

And to get back to the initial issue: the blasted OOM error. I searched high and low and found a thread on a Sun forum describing issues with JRE 1.5 that ultimately threw an OOM error if the runtime was unable to reclaim memory during a GC within a given time frame. The workaround was to set a time constraint in the JVM (which didn't work) or install 1.6_10 or later. This was my first step: I installed that JRE version and pointed my jvm.config to it. The application ran fine under this JRE, except that I was still seeing memory creep to the ceiling with no reclaim.

I then read on one of Sun's GC tuning white papers the following paragraph:

The -XX:+AggressiveHeap option inspects the machine resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory allocation-intensive jobs. It was originally intended for machines with large amounts of memory and a large number of CPUs, but in the J2SE platform, version 1.4.1 and later it has shown itself to be useful even on four processor machines. With this option the throughput collector (-XX:+UseParallelGC) is used along with adaptive sizing (-XX:+UseAdaptiveSizePolicy). The physical memory on the machines must be at least 256MB before AggressiveHeap can be used. The size of the initial heap is calculated based on the size of the physical memory and attempts to make maximal use of the physical memory for the heap (i.e., the algorithms attempt to use heaps nearly as large as the total physical memory).

Note: -XX:+UseAdaptiveSizePolicy is on by default, so I don't explicitly define it in my args.

Amazingly, once I added this to the args and removed ParNewGC (enabling UseParallelGC), the server ran flawlessly for days and days without a restart. I was serving requests into the millions without a restart!!! A partial arg list specific to these settings is below; please let me know your thoughts and concerns, as I always enjoy constructive feedback.

java.args=-server -Xmx1024m -Xms1024m -XX:+AggressiveHeap -XX:+UseParallelGC -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m
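If you want to verify what the collector is actually doing before and after a change like this, HotSpot's standard GC logging flags for JVMs of this era can be added to the same line. This is a sketch to adapt, not a drop-in config; the gc.log path is an example:

```
java.args=-server -Xmx1024m -Xms1024m -XX:+AggressiveHeap -XX:+UseParallelGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m
```

Watching the log shows whether full collections are actually reclaiming memory or the heap is simply growing toward the -Xmx ceiling.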

Note: This was for a ColdFusion 8 instance.

ColdFusion createObject "Component" and Pathing Performance

I haven't blogged in awhile due to schedule, but had to blog this experience I recently had while attempting to stabilize an application and enhance performance.

I have always taken for granted that createObject was lightning fast... well, as fast as feasibly possible under a given JVM.

I think I was dead wrong, and this may be an issue for Adobe to address. I am unclear on the internals, of course. But I have been up against OOMs on my current project and wanted to test using soft references (cached objects) and duplicate() from the cache rather than createObject.

A far stretch, I know... but hey, it was worth a try. What it revealed was that the performance difference was negligible, and createObject was faster in some intervals. Memory behaved the same; no real bonus. So, in discussion with a fellow consultant, I told him I'd ship him a zip for test purposes, as he had said he'd seen a significant performance enhancement with duplicate (not true, by the way).

What happened next totally shocked me. To simplify the test code, I pulled my VO.cfc out of its proper place (several directories down, i.e. sitedir, com, bus, app, model, vo... you get the idea) and put it in the same directory as the calling cfm. I then removed all the pathing (dot notation) from my createObject call and executed the cfm to see if it would still run after the change.

The test was a loop of 10,000 iterations over this createObject call. I was seeing execution times of around 30 seconds. When I ran the updated code it dropped to 577 milliseconds... I am still befuddled by this. Is there really that much overhead with pathing?

I initially thought it was a mapping issue because I had been using mappings, but an absolute path from the root was just as slow.

Please, Adobe, tell me this is Sun's JVM and not your code. I know this is negligible with 100 or so creates, but imagine the boost if I did find something here.

For clarification, I am running on a Mac (OS X, CF running in JBoss), but I also tested on my old Dell (XP, CF running in JRun). I didn't see as dramatic a difference on Wintel, but my execution time went from 30 seconds to 4 seconds. I am happy with 10x faster on Windows too...

Any insight here is greatly appreciated. Example Code:

<cfscript>
    currentTime = now();
    rqaArray = arrayNew(1);
    initTime = getTickCount();
    for (index = 1; index lte 10000; index = index + 1) {
        // pathing example: replace the unpathed component name below with the
        // full dotted path, e.g. com.mercer.mercerOnline.model.RQASummaryVO
        rqa = createObject("component", "RQASummaryVO");
        rqa.rqaID = index;
        rqa.type = "theType";
        rqa.createDate = currentTime;
        rqa.submitDate = currentTime;
        rqa.client = "currentClient";
        rqa.clientID = "clientID";
        rqa.status = 6;
        rqa.userID = 123456;
        rqa.agencyID = "agencyID";
        rqa.predecessorID = 0;
        rqa.locked = false;
        rqa.deleted = false;
        rqa.title = "The Title " & index;
        rqa.policyNumber = "thePolicyNum";
        rqa.agencyName = "agencyName";
        rqa.totalPremium = "totalPremium";
        rqa.hasMessages = false;
        rqa.isAssigned = false;
        rqa.assignedTo = "";
        rqa.proposedEffDate = currentTime;
        rqa.proposedExpDate = currentTime;
        rqa.agentName = "agentName";
        arrayAppend(rqaArray, rqa);
    }
    endTime = getTickCount();
    totalTime = endTime - initTime;
</cfscript>
<cfdump var="#arrayLen(rqaArray)#">
<cfdump var="#rqaArray[10000]#" />
<cfdump var="#totalTime#" />

Why VOs (transfer objects) are good... but they can be abused like any other design pattern...

Sorry for the confusing title, but long titles are rather lame. So you're a Flex, CF, Java, or PHP developer, and you are leveraging all the beautiful one-to-one mapping associated with server and client object creation.

"YES!", you said. No more guesswork; my server values can be readily passed around within my AS code with the ease of code insight! Ctrl-space... wow, there's my property! OK, I'm getting tacky, I know.

So we embark on our design of a system always using VOs, no matter the cost. Eee gaadd, stop now. Depending on the design approach, VOs may have multiple layers of nested VOs. YIKES!

Everyone knows that Rambo's weapons of choice were the bone-cutting hunting knife and the explosive bow and arrows. But there were times when he had to pull in the heavy artillery, or perform a sneak attack with a much more lightweight approach like a choke hold (ahh, the violence of my youth...).

This is why VOs can be a problem when implemented without understanding the performance ramifications that are incurred if they are always used.

Here's a real-world scenario: requesting an array of 100+ VOs from your middle tier, each of which has nested arrays of child VOs. Imagine each parent holding just two such arrays and the impact that could have on performance.

You call in to pull back the parent VOs, each containing two arrays of child VOs (say 5 per array) that all need to get created for each item in the parent array. So in this process we are creating 100 parent objects, and internal to each we are creating 10 child VOs. That yields over 1,000 objects, each of which must be created, and the memory and processing grow every time you do so on your middle tier (now add just a few users doing this incrementally over the first couple of hours).
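The multiplication above can be sketched in Java. The VO classes here (ParentVO, ChildVO) are invented for illustration, not from any real application; the point is simply how quickly nested transfer objects add up per request:

```java
import java.util.ArrayList;
import java.util.List;

class ChildVO {
    String name;
}

class ParentVO {
    List<ChildVO> children = new ArrayList<>();
}

public class VoExplosion {
    // Build `parents` parent VOs, each with `childrenPerParent` children,
    // and count every object instantiated along the way.
    static int buildAndCount(int parents, int childrenPerParent) {
        List<ParentVO> result = new ArrayList<>();
        int created = 0;
        for (int p = 0; p < parents; p++) {
            ParentVO parent = new ParentVO();
            created++;
            for (int c = 0; c < childrenPerParent; c++) {
                parent.children.add(new ChildVO());
                created++;
            }
            result.add(parent);
        }
        return created;
    }

    public static void main(String[] args) {
        // 100 parents x 10 children each = 1,100 objects for a single request
        System.out.println(buildAndCount(100, 10)); // prints 1100
    }
}
```

Now multiply that by concurrent users and by requests per session, and the middle-tier memory growth described above follows.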

I'm being facetious here, of course, as this isn't a very high number. But why return such a dense object to the client unless you are going to use it? There is a lot of wasted horsepower with this approach. Think of the scene with Rambo emptying that M60 E4 machine gun and never hitting his target...

A better approach is to pass back a snapshot of the data directly from your middle tier, and pull back the full VO representation only when an edit needs to be performed or the VO truly is required to facilitate a process in the application.
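A minimal Java sketch of that idea (all class and method names here are invented for illustration): the grid is fed a flat, lightweight row, and the heavyweight VO is only assembled when the user actually edits a record.

```java
import java.util.Arrays;
import java.util.List;

public class SnapshotVsVo {
    // Lightweight snapshot for display: only the fields the grid shows.
    static class SummaryRow {
        final int id; final String title; final String status;
        SummaryRow(int id, String title, String status) {
            this.id = id; this.title = title; this.status = status;
        }
    }

    // Heavyweight VO with nested children, built only when truly needed.
    static class DetailVO {
        final SummaryRow summary;
        final List<String> children;
        DetailVO(SummaryRow summary, List<String> children) {
            this.summary = summary; this.children = children;
        }
    }

    // Cheap call used to populate grids (stand-in for a simple query).
    static SummaryRow fetchSummary(int id) {
        return new SummaryRow(id, "Title " + id, "open");
    }

    // Expensive call used only when an edit is requested.
    static DetailVO fetchDetail(int id) {
        return new DetailVO(fetchSummary(id), Arrays.asList("child-a", "child-b"));
    }

    public static void main(String[] args) {
        SummaryRow row = fetchSummary(42);   // grid population: cheap
        DetailVO vo = fetchDetail(row.id);   // edit action: full VO on demand
        System.out.println(row.title + " / " + vo.children.size());
    }
}
```

The design choice is simply to defer the expensive object graph until the one user action that needs it, rather than building it for every row rendered.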

So if you are going to populate a grid, I don't recommend doing it with VOs. If you absolutely have to create an array of VOs, understand the possible performance impact (and data staleness) that can ensue if the VO is of a complex nature, and how it can impact the health of your server and ultimately the user experience.

The J2EE Core Patterns documentation on Transfer Object speaks to this. Check out the "Consequences" section on caching large sets of VOs. A line from the article: "There is a trade-off associated with this strategy. Its power and flexibility must be weighed against the performance overhead..." THE LINK

Universal Mind is Blogging

Check it out here: UM Blog.

UM is made up of a bunch of stellar Adobe and related software IT professionals. They have some incredible information to share with regard to Flex, Flash, AIR, CF, Java, etc. and we can all learn from them. Enjoy!

Session Damage

I have written before about the architectural flaws I see in various applications, many focusing on poor database design and poorly written SQL. I felt it necessary to write about an issue I have seen in various web applications that, unbeknownst to the developer/architect, can hinder or ultimately spell disaster for the application and the customer using it.

The title "Session Damage" came about from this very issue. When does storing information in session scope in CF, ASP.NET, JSP, etc. become a problem? I've seen a couple of scenarios that were poor approaches to utilizing session. My first experience with this was when, monitoring the JRun service on a UNIX (Solaris) machine (without having seen the actual code), I witnessed 4 megabytes of memory peeling off the server upon every new login. Initially my mind went to the idea that there was a memory leak somewhere causing the bloat. On the UNIX platform this caused the operating system to dump core and restart the service when memory exceeded set thresholds, which led to customers losing shopping cart/session data left and right. YIKES!!!!

There was one scenario where a customer had purchased $4,000 worth of goods and actually took the time to call the support team and have them purchase the items, because he did not have the time or patience to spend another 30 minutes selecting the items all over again... The culprit, once I got a chance to look at the code, was that upon a successful login the application cached much of the database for each user (much of which was never actually utilized). This caused the memory bloat. Of course, this was all done in an effort to speed performance, but regardless, it was a poor approach. I had difficulty explaining what was going on to the CIO, because he was unable to grasp the concept and kept saying, "Memory is cheap, just buy more memory." Ouch. I had to explain that a server has a maximum capacity for memory and that buying more would not fix the problem, just mask it for awhile.

So the system had to be reengineered due to the memory issue, and the queries streamlined to speed retrieval of what had been session data. The lesson learned here is that it is a poor approach to cache data in session to save 0.01 seconds of round-trip time to the database. An architect or developer must weigh the cost/benefit to the system when looking at this challenge. I always recommend tuning the database so that the query yields a timely response.

Another scenario I witnessed recently was session caching of search results. These were very large datasets getting cached at the user level, and it caused the JRun service to bloat to 600 megabytes in no time at all with only a dozen or so active sessions on the server. There are times when caching search info is pertinent, but rather than caching the entire result set, it might be a better approach to cache only the search results' unique identifiers (as an array or comma-delimited list) and go back to the database to pull the details when needed. The developers had built the system this way to facilitate pagination. The solution was to stop caching and go "round trip" to the db for this process. The performance impact was slight (a 0.02 millisecond difference), but in my opinion, even if there were a 1 to 2 second difference, the user would not take issue with the search, considering the big picture: a stabilized server and an application that no longer required a restart during peak usage due to unresponsiveness. Isn't that what we all look for in an application? One that is written once and never requires intervention? 8-)
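The identifier-only caching idea can be sketched in Java (the class and method names are invented for this sketch; the static list stands in for a per-user session attribute). Only the matching IDs are kept; each page of the grid goes back to the database for its details:

```java
import java.util.ArrayList;
import java.util.List;

public class SearchCache {
    // Stand-in for session scope: holds only the matching IDs, not full rows.
    static List<Integer> cachedIds = new ArrayList<>();

    // Run once per search: store the result identifiers only.
    static void cacheSearchResults(List<Integer> ids) {
        cachedIds = new ArrayList<>(ids);
    }

    // For pagination, slice the cached IDs; the caller then fetches details
    // for just these IDs from the database.
    static List<Integer> idsForPage(int page, int pageSize) {
        int from = (page - 1) * pageSize;
        int to = Math.min(from + pageSize, cachedIds.size());
        if (from >= to) return new ArrayList<>();
        return cachedIds.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 1; i <= 25; i++) ids.add(i);
        cacheSearchResults(ids);
        System.out.println(idsForPage(2, 10)); // second page: ids 11..20
    }
}
```

Session memory per user is now proportional to a list of integers instead of the full result set, which is the whole point of the trade-off described above.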

ColdFusion Timeout

I was recently consulting with a client who was experiencing timeout issues due to long-running requests. This issue had plagued their site for quite awhile. My approach to solving their long-running requests was to jump into the CF logs to find out what might be the problem.

The challenge was that it was a legacy Fusebox site, and because of the architecture of that version of Fusebox, every error pointed to index.cfm. The logs were only somewhat helpful with regard to the timeouts and where exactly the errors were coming from on a server supporting multiple applications. Noticing that it was a particular application, I challenged the development team to wrap all cfquery/cfstoredproc requests in a cftry/cfcatch. As is the case with many CF developers (including myself when I was starting out), it is easy to take for granted that the database is always going to be stable and that ColdFusion's very straightforward ability to query the database will execute with no problem.

This is a huge mistake in application development, no matter what technology you are working with. In the code of the immediate application (CF, Java, C#, VB, etc.), any time the application has to go outside of itself to query a database, web service, shared API call, etc., that code must be contained in a try/catch, because it can fail for multiple reasons.
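The pattern is the same in any language. Here is a minimal Java sketch (the names ExternalCallException, fetchOrders, and the simulated failure are all invented for illustration): the external call is wrapped, the failure is captured with context, and the error is rethrown so a global handler can deal with it:

```java
public class GuardedCall {
    // Application-level exception carrying context about where the call failed.
    static class ExternalCallException extends RuntimeException {
        ExternalCallException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Simulated external dependency; a real version would be a database query,
    // web service call, or shared API call.
    static String fetchOrders(boolean fail) {
        if (fail) throw new IllegalStateException("connection timed out");
        return "orders";
    }

    // The wrapper: catch, attach context (the CF analog writes the page and
    // line from the cfcatch scope), then rethrow to the global handler.
    static String guardedFetchOrders(boolean fail) {
        try {
            return fetchOrders(fail);
        } catch (RuntimeException e) {
            throw new ExternalCallException(
                "fetchOrders failed in GuardedCall.guardedFetchOrders", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(guardedFetchOrders(false)); // prints orders
        try {
            guardedFetchOrders(true);
        } catch (ExternalCallException e) {
            // A global handler would log this and show a friendly error page.
            System.out.println(e.getMessage());
        }
    }
}
```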

Back to the CF issue... I recommended that the client update the queries with a timeout setting forcing them to fail; this would identify the long-running queries (curiously, there is no timeout for cfstoredproc, wonder why?). Because CF throws a timeout error, the cfcatch could write a very detailed description from the given cfcatch scope, including the actual page and line of code where the failure happened. Once this code was implemented, they could place a cfthrow tag at the end of the catch to bubble the error up to a global error handler in Application.cfc, so that it would be handled gracefully by displaying a friendly error page.

Sorry if I'm getting wordy... thoughts tend to flow when blogging, and sentences run on... So you may be thinking: I have a huge application with queries all over the place; where the heck do I start? If you are a Dreamweaver, CFEclipse, or HomeSite user, you have the tools at your disposal to do a site-wide search. I like Dreamweaver's and HomeSite's search capabilities because you can export the file list in various formats and organize a plan of attack with your team to patch up the application. You'll have your site ready to diagnose itself and, in turn, be able to stabilize or reengineer the failed module.

Dump a Resultset to File

This is an example of how to dynamically loop over a resultset's columns and place them as headers in a tab-delimited file, appending row-level data in tab-delimited fashion as well. I am currently on a project where we will be retrieving data for import into a COTS product that accepts tab-delimited flat files into its repository. Grab the code here.
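The core of the technique can be sketched in Java (this is a stand-in, not the downloadable code: plain lists play the role of the resultset's column list and rows). Column names become the first tab-delimited line, and each row follows in the same format:

```java
import java.util.Arrays;
import java.util.List;

public class TabDelimDump {
    // Join the column names as a header line, then each row's values,
    // all tab-delimited with one record per line.
    static String toTabDelim(List<String> columns, List<List<String>> rows) {
        StringBuilder sb = new StringBuilder();
        sb.append(String.join("\t", columns)).append("\n");
        for (List<String> row : rows) {
            sb.append(String.join("\t", row)).append("\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String out = toTabDelim(
            Arrays.asList("id", "name"),
            Arrays.asList(
                Arrays.asList("1", "alpha"),
                Arrays.asList("2", "beta")));
        System.out.print(out);
    }
}
```

A real version would pull the column list from the resultset's metadata and stream the output to a file rather than a string, but the header-then-rows shape is the same.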

Copyright Strikefish, Inc., 2005. All rights reserved.