March 11, 2014

why are geographers still scared of Neural Networks?

While debating with myself about starting a blog on GIS, I was in a sort of enlightenment phase, discovering that GIS is not just ArcGIS (as I was once taught..) or other GIS systems (QGIS, JumpGIS or GRASS, which I was sort of scared of and was just poking without a reply). I was lit by the light of GISc, where Sc stands for Science, and thought that nearly every problem, if one can rethink it and find its spatial axis, could maybe be solved with GISc. So came the title: if we have a problem, we should think in GIS, as thinking is one of the main parts of science as such. At the same time I played with words to state that "THINKING IS." Luckily, this last meaning will stay forever, as I am less and less sure about the first one. How can one think in GIS (seriously!)? We can think mathematically, geographically, logically.. but there is no such expression as geo-informatically, hm..

The doubts about the SUPER-science GISc came just after I started reading about Artificial Neural Networks and, while googling, came across an article titled "GIS & Artificial Neural Networks: Does Your GIS Think?". Of course, the GIS in that article stands just for the systems: it asks whether your system is thinking and explains the basics of Neural Networks without really touching GIS. Well, it was the name that caught me: it is not us who think in GIS, it is GIS itself!? In that case, of course, my GIS was not thinking.. I was surprised I had never heard about Neural Networks in a GIS context. One can say it is my own fault, but well, I am a geographer, and such a topic wasn't popular in the titles of geographical journals. I have just browsed Wiley's online library, where:
There are over 90 thousand results for "neural networks", but just 3 of those are in "The Geographical Journal", and all of them are meeting notes, reviews, ceremonies.. Well, not one article.
It seems geographers are not the ones to occupy themselves with such an issue: neural networks! Pff! Or maybe the topic is considered unsuitable for a geographical journal?

Of course there are geographers, the so-called pioneers, who try to bring novelties into geography. Somehow, not all those novelties, like Neural Networks, get accepted.
It seems recent, but already more than a decade ago the geographer Stan Openshaw wrote the book "Artificial Intelligence in Geography", filled with such enthusiasm! The book grew out of the course he was giving back then at the University of Leeds, and it was published by the same Wiley whose journal section I was just browsing for Neural Networks. Already back then, so enthusiastic about neural network applications in geography, S. Openshaw repeatedly asks in his book, with surprise, why geographers still aren't using such a powerful tool for geographical problems, which are non-linear as a rule and thus can neither be approximated by mathematical models nor fitted into assumptions of independence between variables.

Really, why don't geographers use neural networks? Already back then, in 1997, S. Openshaw was happily expecting the upcoming "golden age" of "Thinking GIS" and "Thinking Geography", and it never came.. It is surely not because of difficulty, as it is one of the easiest techniques to use: one doesn't really need to understand much, since the technique does the thinking; one just needs to choose the data for the thing one is aiming at.
Magic hand black box!
(found on eBay)
Maybe it is all because of the mystery within the technique, the so-called black box. One cannot control much of the process by which the output is created; neural networks do it themselves. They are simply trained by being fed loads of data together with the expected results. The relations between the input data and the expected result are constructed, or learned, by the network itself, which keeps adjusting its guesses to minimize the error. But well, one does not even need to understand this to use it. What geographers really need, in my opinion, is simply a great interface and great mapping possibilities, thus great graphics, or else Neural Networks will stay as scary as GRASS. And although I'm laughing here, well, I have not met many geographers, especially social geographers, using the GRASS software. Sure, easy-to-use software might not be enough (as some already exists); one would need to make it well accessible (as in, not too expensive) or commercialized to such an extent that even small businesses would go crazy to have it, and so it would even be taught at the university. But, of course, you never know..
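To see what "trained by being fed data and expected results" actually means, here is a minimal sketch of the black box at work: a tiny network taught the XOR function purely from input/output pairs, adjusting its weights to shrink the error. This is a generic toy illustration (my own, not from any GIS package or from Openshaw's book).

```python
# A minimal "black box": a tiny neural network learning XOR.
# It is only fed inputs and expected outputs; the "relations"
# (the weights) are learned by minimizing the error.
import numpy as np

rng = np.random.default_rng(42)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units with sigmoid activations.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

_, out = forward(X)
initial_error = float(np.mean((y - out) ** 2))

lr = 0.5
for _ in range(10000):
    h, out = forward(X)
    # Backpropagation: push the error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

_, out = forward(X)
final_error = float(np.mean((y - out) ** 2))
print(f"error: {initial_error:.3f} -> {final_error:.3f}")
```

The point of the "black box" complaint is visible right in the code: nothing tells you *what* the learned weights mean geographically; they just happen to reproduce the expected outputs.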

February 05, 2014

BIG nearby anthropogenic data of VGI to solve the consequences, not the processes

How does it come that, though we are flooded over with data, there is still a lack of it?

Even if one were able to grab all the available free, crowdsourced volunteered geographic information (VGI) at a time when nearly every smartphone user, as a live sensor, is or soon will be continuously streaming data, that data would still be just data about us(!) and around us; it would still be just sequences of point data at a time and place.
The information one can get from VGI data is spatially limited in the sense of horizontal space (VGI is usually near-surface data from natural human habitats, no matter whether created by smartphones or digitised from satellite data) and, as a result, limited in the sense of scientific domain (or space, one could say). For that reason, I would dare to state that the information one can get from VGI could be classified as nearby anthropogenic and therefore can be used to analyse just anthropogenic processes and phenomena. Nearby anthropogenic should be understood as just that part of the anthropogenic (human re-created) Earth layer which is very near the natural human habitat (living environment). Shortly, to explain: the anthropogenic area can be considered to stretch from the deepest mining caves up to airplane flying heights, but none of those are human living spaces, at least not yet.
The term nearby anthropogenic bounds well the space from which the volunteered data is coming. That space is a natural result of being the 'common interest' of the commons, the habitat. Since the data is not created for any particular purpose, it usually concentrates in highly populated habitats and correlates with technical accessibility. I could easily support this idea with those OpenStreetMap data animation videos and pictures, where one can see how the data is collected within highly populated areas with great technical and Internet access, and how, just when a purpose comes, some natural disaster, the organisations and the commons quickly 'colour' the white space on the map with nearby anthropogenic data like infrastructure, points of interest and so on. You can look at the well-known public example of the mapping of Haiti for disaster management after the earthquake occurred.

And well, although there is so much data created every day, it is all nearby anthropogenic. VGI is very useful for solving or coping with the consequences of some environmental processes within the human habitat. It is natural that we want to save ourselves and our environment, the things we see around us. Nevertheless, the problems we meet, let's look at the same example of Haiti, exist and occur within a much broader space and require much more specific and complex data, which is often very expensive. For now, no VGI can help us explain the processes, the geographic phenomena, that do not happen within people but do have an effect on them. Those far-far-away processes are left as a concern for the sciences, as they are complicated and costly, with no clear use at any specific moment (like when the money is injected, looking from the economic perspective).

I wonder, is there any way the VGI data will grow wider? Will someone actually be interested in voluntarily downloading apps or getting specific devices for specific data collection and then sharing? Could such devices be built so well that the calibration wouldn't drift over time, due to weather changes and so on? Could oil-type or precipitation chemical-composition DATA ever become BIG DATA, big in quantity?

September 26, 2013

searching for darkness in our night time: light pollution

Although the phrase "light pollution" might have already slid past your ears at some point, the artificial light at night is sliding through your windows into your eyes constantly!

Yes, that artificial, misdirected, obtrusive light colouring our dark sky and hiding the constellations is called "light pollution", as it has negative effects on our health and on our ecosystem. There are various types of light pollution, as there are various types of lights, leading to various consequences, but I am not going to write about that here. One can shortly look it up on Wikipedia for more information.

Looking at this phenomenon of our own creation from a geographic perspective, it is interesting how the global mapping of light pollution, as well as the search for non-polluted, pitch-dark areas, is done.

February 12, 2013

Wikipedia lets the geo-tagging in

I once wrote about geo-socializing as telling your location to everyone, in order to create a sort of virtual community out of internet users with similar interests. Anyway, I still believe that "Man is by nature a social animal", as Aristotle stated.

But maybe it isn't just that anymore? Maybe now we really are approaching, or coming into, a cyborg stage? Forget the interests, connect to the new broadband and catch the feeling of being important and known! Tell the whole world about yourself.

Now one can do that even through a WIKI!
Have you ever wondered if there are Wikipedia articles about things near you? Well, wonder no more! Today, we present the GeoData extension for MediaWiki, which now provides a structured way to store geo-coordinates for articles, as well as an API to make queries around this information (wiki blog).
Yes, it is experimental, and it's mostly just for seeing nearby articles. But would such an aim remain so in this open sphere? I wonder how this function will emerge or 'mutate' when people start playing with coordinates and geo-tagging in this Encyclopedia.
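The API mentioned in the quote can already be tried out: the GeoData extension exposes a `geosearch` module on the standard MediaWiki API endpoint. A small sketch (the coordinates here are just an arbitrary example of mine) of how one would build such a query:

```python
# Building a query against the MediaWiki GeoData API ("geosearch" module)
# to list Wikipedia articles near a coordinate. Only the URL is constructed
# here; fetching it (with urllib or requests) returns JSON with nearby pages.
from urllib.parse import urlencode

def geosearch_url(lat, lon, radius_m=10000, limit=10):
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",  # latitude|longitude
        "gsradius": radius_m,       # search radius in metres
        "gslimit": limit,           # max number of results
        "format": "json",
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

url = geosearch_url(51.5, -0.12)  # somewhere in central London
print(url)
```

So the geo-tagging is not just stored; anyone can immediately ask "what is written about the things around me?" with one HTTP request.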

February 11, 2013

overlay as a core GIS concept is not so strong in humanities GIS

The overlay of different information was one of the very first concepts of GIS. It is like the core function, the cornerstone, the first brick <...> that set off the development of GIS and on which all GIS users stand now. And it remains one of the most popular methods of data analysis.

This concept is explained in every basic GIS cookbook; indeed, I haven't encountered any book that would not talk about overlay, presenting it as a concept, technique, procedure or method. It's always presented in a very simple way, by visualizing or explaining boolean operations. What else could be said? Apart from the problems of scale, resolution and data uncertainty, probably not much more, until we look into GIS for the Humanities (as Stanford coins it) and the GIS concepts start to sway..
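Those boolean operations from the cookbooks really are as simple as they are presented. A toy sketch, treating two binary rasters as numpy arrays (the "forest" and "protected area" layers are invented for illustration):

```python
# Raster overlay as boolean algebra: two binary layers combined with
# AND (intersection), OR (union) and AND NOT (difference).
import numpy as np

# Two 4x4 binary rasters, e.g. "forest" and "protected area".
forest = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=bool)
protected = np.array([[0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0]], dtype=bool)

protected_forest = forest & protected     # AND: forest inside the protected area
either = forest | protected               # OR: covered by at least one layer
unprotected_forest = forest & ~protected  # AND NOT: forest outside it

print(int(protected_forest.sum()), "cells of protected forest")
```

Every cell gets its answer from the cells stacked directly above it in the other layers; and this per-cell logic is exactly what starts to sway once the layers carry humanistic, fuzzy, contested categories instead of crisp land-cover classes.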

The suitability of GIS concepts for social or humanitarian research is well presented in the book "The Spatial Humanities", written by geographers and historians who at the same time have an informatics education. The introduction starts by pointing out that "the power of GIS for the humanities lies in its ability to integrate information from a common location, regardless of format, and to visualize the results in combinations of transparent layers on a map of the geography shared by the data", sadly adding that the existing GIS software was created for environmental planning questions and now it "requires humanists fit their questions, data, and methods to the rigid parameters of the software", which makes fusing GIS with the humanities challenging in the extreme.

I have personally experienced that while working on the visualization of cultural areas. And nothing was as hard as finding a way to visualize them, just because the concepts themselves were not suitable for my task!

November 22, 2012

is there a space in cyberspace for geography ?

Searching and searching for everything and anything we still haven't found, and, if already found, then searching for how to reconstruct and reinvent it. Like the new urban image, the WIFI-scape or Urban terrain, as the Norwegian YOUrban group calls it.
That's who we want to be - the Inventors.

LightPainting WIFI from the YOUrban group
And so we need more 'space' for Inventions, and so we expand it! Expanding spaces, realities, cognition.. 
Does this expansion affect us as individuals, as a society? Does it also expand our research, raising new questions, opening new fields or even forming new sub-sciences, or does it indeed just let us create new terms and subjects to speculate about the same things, like spaces?
Such questions grew after reading an article on Geography and Cyberspace by M. Graham. Indeed an interesting and easy-going article, presenting well the background, or the 'hi-/-story', of cyberspace. What interests me most is the relation of cyberspace to geography. Clearly, geographers don't seem interested in this topic at all (no big research done), but, as the author tries to prove, they should be, because cyberspace is itself a sort of space and therefore should also be reviewed from a geographical perspective.
At that point it seems to me that the article is written in a mood as serious as it is slightly ironic.

October 09, 2012

digging out the edge of the hill

Recently I had the opportunity and time to look practically at boundary questions, the ones I always like to philosophize about... I got a simple task: to calculate the volume of a small heap (much smaller than a hill) from an image with elevation values (a DEM). I just needed to automate and simplify the steps that were already determined by others, mostly some routine work with ArcGIS (ESRI) tools, so just a bit of patience learning to geoprocess with Python.

Digging the boundary of a heap
But before starting the calculations I had to find out where this heap is and where it isn't anymore. I had to draw clear boundaries in order to start any calculations at all. Apparently, no tool for heap determination was to be found where I work; until now everything had been done manually, by digitizing: zooming out and in, searching out the boundary where the heap starts or ends, and going around it.
The human brain is great, quickly recognizing the pattern it searches for. Maybe recognizing it just approximately, but nicely and unconsciously ignoring everything else: small depressions, peaks, or elevation model noise due to the calculated heights of bushes, artificial constructions or photogrammetric noise...
But the problem is that with huge, high-resolution images it takes hours and hours to load them and to zoom back and forth trying to draw a line that determines "exactly" where the heap starts and/or ends. Besides, when you start looking at a greater and greater resolution, the picture of the whole image disappears, and it becomes hard for our brain to process just the various tones of the colours in the pixels in front of it. So, I needed to find out how to automate this first step so that I could work on the calculations. And just looking at a seemingly simple task, or indeed at the DEM (it's pretty nice!), I quickly ran into a dark, dark forest searching for a way to determine where this heap finishes or starts, in order to be able to define it and build the script.
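One way out of that forest (a sketch of my own, not the actual ArcGIS workflow used at work, and with a synthetic DEM standing in for the real one) is to let the computer do the "where does it start?" step crudely: threshold the DEM at a base elevation, keep the largest connected region above it as the heap, and integrate the heights over the cell area to get the volume.

```python
# A sketch of automating heap delineation on a DEM:
# threshold at a base elevation, keep the largest connected
# region above it, integrate heights for the volume.
import numpy as np
from scipy import ndimage

def heap_volume(dem, base_level, cell_size):
    """Boundary = edge of the largest connected region above base_level."""
    above = dem > base_level
    labels, n = ndimage.label(above)          # connected components
    if n == 0:
        return 0.0, np.zeros_like(above)
    sizes = ndimage.sum(above, labels, range(1, n + 1))
    heap = labels == (np.argmax(sizes) + 1)   # largest blob = the heap
    volume = np.sum(dem[heap] - base_level) * cell_size ** 2
    return float(volume), heap

# Synthetic DEM: flat ground at 100 m with a Gaussian heap on top.
yy, xx = np.mgrid[0:50, 0:50]
dem = 100 + 8 * np.exp(-((xx - 25) ** 2 + (yy - 25) ** 2) / 40)

volume, heap_mask = heap_volume(dem, base_level=100.5, cell_size=1.0)
print(f"heap volume of {volume:.0f} cubic metres over {int(heap_mask.sum())} cells")
```

Of course, the whole philosophical problem hides in that one parameter: `base_level` is exactly the "where does the heap start?" question, pushed into a number that somebody still has to choose.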