The FOSS4G WMS Shootout is bikeshedding. I have the utmost respect for the WMS Shootout/Benchmarking participant teams, but the exercise is pointless. There should be a new and more relevant competition for web map servers. There should be Thunderdome.
First things first, this is lower case wms — web map server, not the deceased Web Map Service specification. Similar to the Programming M*therf*cker manifesto, we do one thing in Thunderdome: we serve maps.
Second, there is no such thing as a level playing field in the real world. Assuming all things being equal and spherical cows is fine for theory, but the world doesn't work that way. So the first rule is no limits on technology or architecture; if a vendor wants to spend a million dollars, no problem! The catch is that the architecture, tweaks, and costs have to be disclosed. To make this a little more realistic, the infrastructure has to be deployed on approved Amazon Machine Images (AMIs) and the billing has to be publicly available. So deploy one machine or deploy a cluster of 100 machines; it doesn't matter. We want to see the price/performance ratio.
Third, all participants must serve the same data set. You choose the backend: file, database, or NoSQL — your choice, doesn't matter. Again your architecture is fully documented, including indexes built, connection pooling, etc. We're after configuration and maintenance information.
Fourth, the prescribed data set and maps must be served in a format or protocol supported by OpenLayers. This means no plugins. WMS, tiles, GML, GeoJSON, ArcGIS, whatever works best. Have at it, just as long as it can be retrieved by OpenLayers. Why OpenLayers? It has the best support for a variety of formats and protocols of all the JavaScript mapping libraries.
Fifth, users will be able to download a tool similar in design to Anonymous' LOIC (Low Orbit Ion Cannon) to fully exercise each system for a week. This geoLOIC, as it were, will send the appropriate requests to the targeted system. The tool will gather real-world performance across a variety of platforms and network conditions; the results will be sent back to the organizers at the conclusion of the test.
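The geoLOIC doesn't exist, so the sketch below is pure invention on my part: the shape of its core loop, nothing more. The idea is to fan requests out at a fixed concurrency and record per-request latency and success, which is exactly the raw material the organizers would need for the metrics.

```javascript
// Hypothetical geoLOIC core loop (names and structure invented, not a real tool).
// Fire requests at a fixed concurrency; record latency and success per request.
async function hammer(requestFn, urls, concurrency) {
  const results = [];
  let next = 0;
  async function worker() {
    while (next < urls.length) {
      const url = urls[next++]; // claim the next URL synchronously
      const start = Date.now();
      try {
        await requestFn(url);
        results.push({ url, ms: Date.now() - start, ok: true });
      } catch (err) {
        results.push({ url, ms: Date.now() - start, ok: false });
      }
    }
  }
  // Launch `concurrency` workers that drain the shared URL list.
  await Promise.all(Array.from({ length: concurrency }, () => worker()));
  return results;
}
```

In a real run, `requestFn` would be an HTTP GET against the target server; injecting it keeps the harness testable without a live target.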
Sixth, logs for each server will record downtime, response time, and other metrics. If a geoLOIC shuts down a server, the team has to get it back up. Uptime is crucial.
All participant data will be presented across a standard set of metrics such as uptime, total response time, response time at the server, and cost. All documentation, logs, etc. will be publicly available.
The goal of wms Thunderdome is to develop a set of statistics that implementers can use to evaluate web map servers across a variety of axes, whether it be price/performance, availability/cost, maintenance/performance, etc.
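To make "a variety of axes" concrete, here is one way the raw logs could fold into comparable ratios. The field names are my own invention, not a published Thunderdome schema:

```javascript
// Illustrative only: fold raw Thunderdome logs into a few comparable ratios.
// Field names are assumptions, not a published schema.
function scorecard(team) {
  return {
    // dollars spent per million requests served (price/performance)
    costPerMillion: team.costUSD / (team.requestsServed / 1e6),
    // fraction of the test window the server was actually up
    availability: team.uptimeSeconds / team.testDurationSeconds,
    // average response time at the server, in milliseconds
    meanResponseMs: team.totalResponseMs / team.requestsServed,
  };
}
```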
Now go listen to the creepy man's intro.
Saturday, June 25, 2011
Friday, June 24, 2011
Summarizing why WMS is Dead
At the urging of @wonderchook, I did an Ignite talk at WhereCampDC titled "WMS is Dead".
I feel that I've flogged this issue enough, but there's nothing like putting the finishing nails in the coffin to tidy it up.
Architecture and Scaling
WMS was designed in the late 1990s, when we really didn't understand how the web worked at scale. I think the dominant mental model of web services at that time was RPC, and it's evident in the architecture of OGC W*S. The architecture was designed to mimic GIS systems, so WMS requests are standardized RPC calls to an abstract model of a GIS. Where WMS went wrong was that it ignored existing Web standards and the fact that the Web's architecture and infrastructure were optimized for delivering documents via links. Ignoring URIs that retrieve standard, well-known MIME types such as PNGs contributes to its clunky interface. Retrieving tiles via URIs is a Web-native operation, whilst sending long requests to create a single document is not. That being said, I believe that layering a REST interface over WMS is a futile exercise in trying to hang with the cool kids. As Bruce Sterling often says, "Good luck with that."1
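The interface difference is easy to see side by side. A sketch, with placeholder hostnames: a WMS GetMap call packs the whole abstract map state into one long RPC-style query string, while a tile is just a link to a well-known document.

```javascript
// Illustration only; hostnames are placeholders, not real servers.
// WMS: the client describes the entire map it wants in one RPC-style query.
function wmsGetMapUrl(base, layer, bbox, width, height) {
  const params = new URLSearchParams({
    SERVICE: "WMS", VERSION: "1.1.1", REQUEST: "GetMap",
    LAYERS: layer, STYLES: "", SRS: "EPSG:4326",
    BBOX: bbox.join(","), WIDTH: width, HEIGHT: height,
    FORMAT: "image/png",
  });
  return base + "?" + params.toString();
}

// Tiles: a stable, cacheable, link-addressable URI to a known document.
function tileUrl(base, z, x, y) {
  return `${base}/${z}/${x}/${y}.png`;
}
```

Every proxy and browser cache on the Web already knows what to do with the second URL; only a WMS server understands the first.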
Cartography
In WMS, you make maps with SLD. I can't say enough that SLD is a bad idea. XML is for data; it should not be perverted into a combined configuration, query, and pseudo-programming language.
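For readers who haven't met SLD, here is roughly what a minimal line style looks like: one stroke color and one width, buried under layers of element nesting (an abbreviated SLD 1.0 fragment; the layer name is made up):

```xml
<StyledLayerDescriptor version="1.0.0" xmlns="http://www.opengis.net/sld">
  <NamedLayer>
    <Name>roads</Name>
    <UserStyle>
      <FeatureTypeStyle>
        <Rule>
          <LineSymbolizer>
            <Stroke>
              <CssParameter name="stroke">#333333</CssParameter>
              <CssParameter name="stroke-width">2</CssParameter>
            </Stroke>
          </LineSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
```

A CSS-like styling language expresses the same rule in a line or two.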
Projections
Projections have been cited as one reason WMS is superior to tile-based mapping. My response is that Web mapping is a different medium from traditional cartography. In traditional cartography, the map maker has only the X and Y dimensions to convey information, which makes projections one of the most important tools for the cartographer. In web mapping, we have n dimensions to present information, from info bubbles to interactive tools. Projections are still important for operations that require measurement, but that occurs on the backend and is less important for presentation.
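For what it's worth, the projection math that presentation actually needs is modest. A sketch of the spherical "Web Mercator" (EPSG:3857) forward projection that most tile schemes use:

```javascript
// Spherical Web Mercator (EPSG:3857) forward projection: lon/lat degrees
// to projected meters. Sketch only; real backends use a library like proj.
const R = 6378137; // WGS84 equatorial radius in meters
function toWebMercator(lonDeg, latDeg) {
  const x = R * ((lonDeg * Math.PI) / 180);
  const y = R * Math.log(Math.tan(Math.PI / 4 + (latDeg * Math.PI) / 360));
  return [x, y];
}
```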
Interoperability
WMS fails at interoperability because it is implemented at the application level. Chaining together a bunch of applications through a series of client requests does not result in a performant web application. In addition, WMS stateless (request/response) operations bypass Web-native optimizations such as HTTP caching, which has led to software such as MapProxy to overcome this shortcoming. Again, this is a band-aid solution to a deeper architectural problem.
Interoperability should take into consideration how the Web is designed rather than providing an overlay that implements a loosely coupled, client-server-like architecture over the Internet. This means that interoperability should occur at a much lower level than application-to-application communication, such as data and document types.
I think that the prevalence of tile servers proves the point that URIs plus well-known MIME types trump Web RPC. Data is out of scope for WMS, since it is primarily designed as a presentation interface (with lots of cruft), but I think it is important to the future of Web mapping as well as interoperability.
When the W*S standards were under initial development, OGC made a bet that XML would rule the day, which is understandable since XML was the new hotness at the time. Implicit in this assumption was that XML parsers would improve, especially in the browser. However, as this blog explains, developers are tired of writing XML parsers and use JSON instead. JSON is a well-supported and compact way to serialize data that trusts the developer to use good taste to unpack the data instead of mandating the download of XML schemas. The rise of fast and powerful JavaScript engines, as well as server-side JavaScript such as node.js, was certainly not predictable. This does not bode well for W*S standards, which will be overcome by events even more quickly. For the INSPIRE folks, "Good luck with that."
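The JSON path is one built-in call, with no schema download and no hand-written parser. A sketch with a GeoJSON-style feature (the coordinates are made up):

```javascript
// Unpacking a GeoJSON-style feature: one built-in call, no schema, no
// hand-rolled parser. Sample data invented for illustration.
const body =
  '{"type":"Feature","geometry":{"type":"Point","coordinates":[-77.03,38.89]},' +
  '"properties":{"name":"DC"}}';
const feature = JSON.parse(body);
const [lon, lat] = feature.geometry.coordinates;
```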
I feel that the direction of Web mapping will be tied to JavaScript on both the client and server side. We are already seeing this with the popularity of TileMill, which uses node.js, and client scripting libraries such as Leaflet and ModestMaps-JS. Like it or not, this direction in Web mapping is organic instead of top-down from a standards body.
1 In all fairness, WMS was designed to retrieve a map dynamically based on a user's request. The use case was not "I want a fast background map with data sprinkles," which is the real-world use case.
Sunday, June 19, 2011
Recovering the Google I/O Samsung Galaxy Tab 10.1 on OSX
Short Story
Several days after an OTA update to Honeycomb, my Google I/O Galaxy Tab crashed and went into a constant reboot state. These are the steps I used to recover. Note that this will delete all your personal data in the /data directory.
- Download fastboot; you will need this tool to get to the recovery screen. Unzip it and make sure it is executable:
chmod 755 fastboot-mac
- Put the Tab into recovery mode by holding the Power button and Volume Down. The screen should look like this:
- Select USB by clicking Volume Down, then click Volume Up to confirm. Plug the Tab into your computer.
- Download recovery.zip and unzip it
- Fastboot the Tab (I used the parameters from here):
./fastboot-mac -p 0x0700 -i 0x0955 boot recovery.img
- On the Tab, use the Volume control to select a factory reset. This will reformat /data — note that all your data will be deleted.
- Reboot the Tab.
Long Story
I tried a number of methods to recover the Tab without losing data. I tried to use Droid Basement's guide to recover the Tab, but I was stymied by /data. When the Tab booted using fastboot, the /data directory was mounted read-only. I tried mounting /data read-write numerous times via adb shell, but every time I tried to write to /data it immediately reverted to a read-only state. Reformatting /data seemed to be the only remaining choice.
If you want to do more with the Galaxy Tab, I highly recommend Droid Basement for hacks and images.
UPDATE 4/19/2012
If you need to recover files, you can use the Clockwork Recovery image instead of the stock recovery image.
./fastboot-mac -p 0x0700 -i 0x0955 boot recovery-cwm_4.0.0.4-sam-tab-10.1.img
Mount the /data directory from the Clockwork Recovery menu and navigate to the directory containing the files you want to retrieve. You can then use adb from the Android SDK to retrieve the files.
./adb pull /data/media/DCIM/Camera/20120415_191707.mp4