Wednesday, June 2, 2010

Re: Data Visualization in a Mashed-Up World

Doing this radio show was a weird experience. I've written about Data Mashup for years, and I assumed the other participants would be coming from a similar base of knowledge. Instead it felt like I had fallen into a time machine and gone back to 2006. If you read the transcript, you will understand what I'm talking about.

To create a data mashup there needs to be some common context, but according to the other speakers this context is limited to geography (read: Google Maps). I'll concede that universal truths, like latitude and longitude, are a convenient way to connect public data from completely unrelated sources. What I don't understand is why the radio guests (consultants and vendors) didn't consider the implications for a private organization. As I said during the show, customers and products are common contexts within a company. There is great value for a CEO in combining sales, marketing, and support information for a 360-degree view of the business. Data mashup allows this to happen, even if there is no comprehensive data warehouse or master data solution in place.
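As a rough illustration of what I mean by a common context, here is a minimal sketch in pandas. The sources, column names, and customer key are invented for the example; the point is only that a shared key plays the same role inside a company that latitude and longitude play for public data.

# Three unrelated sources joined on a shared customer key.
# All data and column names here are illustrative.
import pandas as pd

sales = pd.DataFrame({"customer_id": [1, 2, 3],
                      "revenue": [12000, 8500, 4300]})
marketing = pd.DataFrame({"customer_id": [1, 2, 3],
                          "campaign": ["email", "webinar", "email"]})
support = pd.DataFrame({"customer_id": [1, 2, 3],
                        "open_tickets": [0, 4, 1]})

# The customer id is the common context that ties the systems together,
# the same way latitude/longitude ties public data to a map.
mashup = (sales
          .merge(marketing, on="customer_id", how="outer")
          .merge(support, on="customer_id", how="outer"))

print(mashup)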

The other interviewees also framed the state of the art as a set of developer APIs. This also made me reminisce about the early days of mashups. Has the rest of the industry really not progressed beyond this nascent stage? It was at this point that I became a little flustered. Had none of these "experts" seen the drag-and-drop data mashup that InetSoft's been offering since 2007?

Luckily the host, Eric Kavanagh, was experienced and knowledgeable enough to understand the best practices I was putting forth. He even described my vision of enterprise data mashup (IT defining the atomic sources, metadata, and security; and users building and sharing their own mashups) as the "ideal".

I hope that the word spreads about what data mashup can be, and we can go back to the future.

Friday, April 30, 2010

Agile BI, Not Automatic BI

Forrester's Boris Evelson recently published a paper introducing "Agile BI" with the same definition we've been pushing for years:
  1. Faster initial development
  2. React more quickly to changing requirements
The rest of the paper goes on to discuss "metadata-generated BI", which it calls "one such example of a new technology supporting Agile BI". These metadata-generated BI applications essentially do the same work a traditional BI environment requires (building a data warehouse), but much of it is automated. I look forward to hearing from Boris about other Agile BI technologies, because Automated BI doesn't thrill me.

Automating the initial data work surely saves a lot of time and effort, but you may not get what you want. This is where an application like ours steps in. Instead of doing the same old thing faster, we take a new approach. We support Agile BI by eliminating the upfront effort of creating a Data Warehouse and instead providing Data Mashup. Once the application (reports and dashboards) is designed the way you want it, you can set up a Data Grid Cache to make it perform better. Because the data transformation is virtual, it is much faster to create and much easier to change. The tools mentioned in the paper automate the ETL work based on the metadata, before reports and dashboards are created. We automate the ETL work after the reports and dashboards are created, and only if desired.
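To make the contrast concrete, here is a small sketch of the idea in plain Python. The function names and the pickle-file cache are my own illustration, not InetSoft's actual implementation; they just show a transformation that stays virtual until you choose to materialize it.

# Virtual-first: the mashup is recomputed on demand, so changing it is cheap.
# Materializing is an optional, after-the-fact step once the design settles.
from pathlib import Path
import pandas as pd

def build_mashup(sales: pd.DataFrame, support: pd.DataFrame) -> pd.DataFrame:
    """Virtual transformation: joined on demand, easy to change."""
    return sales.merge(support, on="customer_id", how="left")

def cached_mashup(sales, support, cache_file=Path("mashup_cache.pkl")):
    """Optional cache, analogous to materializing only after the design is right."""
    if cache_file.exists():
        return pd.read_pickle(cache_file)
    result = build_mashup(sales, support)
    result.to_pickle(cache_file)  # the deferred, opt-in "ETL" step
    return result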

Apparently some of these tools will also automatically generate reports based on the data. Again, is the saved effort worth anything if the finished product is not what the users want? By providing our web-based drag-and-drop tools for both interactive data visualization and publishing-quality reports, we help our customers create exactly what they want, and more efficiently than with traditional tools.

Metadata-generated BI is the same old-world monolith, automated. It eliminates much of the work, and unfortunately much of the intelligence, from the process. The BI world needs better tools for skilled humans, not the same tools in the hands of robots.

Wednesday, April 28, 2010

Data Visualization in a Mashed-Up World

I've been invited to participate in a DMRadio round table about visualization and mashup. If you're interested, please visit this site to register.

Wednesday, February 17, 2010

US Job Losses

There's plenty of political posturing about the current recession and whether the American Recovery and Reinvestment Act (Obama's stimulus bill) helped. I never rely on conjecture, so I went in search of facts. These were easy enough to find from the Bureau of Labor Statistics. Then, instead of staring at the raw unemployment numbers, I subtracted consecutive values to see the month-to-month change in job losses (in other words, the rate of change, or velocity). Out of curiosity, I also gathered the numbers for men and women. I plotted these on a chart, added an annotation for when the stimulus bill was passed, and then added trend lines (using least-squares) to smooth out the jaggedness. Here's the resulting graph, which you can click on to see the code for:

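For anyone curious about the arithmetic behind the chart, here is a minimal sketch of the differencing and the least-squares trend fit. The numbers below are placeholders, not the actual BLS figures, and the code is an illustration rather than the exact script behind the graph.

# Difference the monthly series to get the month-to-month change,
# then fit a least-squares trend line to the differenced values.
import numpy as np

# Monthly employment levels, in thousands (placeholder values).
employment = np.array([135000, 134400, 133700, 133100, 132700, 132500])

# Consecutive differences: the "velocity" of job losses.
monthly_change = np.diff(employment)

# Least-squares fit (degree-1 polynomial) over the differenced series.
months = np.arange(len(monthly_change))
slope, intercept = np.polyfit(months, monthly_change, 1)
trend = slope * months + intercept

print(monthly_change)
print(trend)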

Monday, January 25, 2010

Wal-Mart Stores Over Time

Here's the latest visualization I created, showing the growth of Wal-Mart over the years. What's really interesting is the local focus for the first 20 years.