I know the subject is touchy, but let's keep an open mind and think for a moment about the hype and the half-truths in what is being said out there.
At the recent Where 2.0 conference, Jeffrey Johnson and David Riallant, both of Pict'Earth (the former a web application developer, the latter a photogrammetry professional), presented the approach behind their work, which they had already discussed at the AGU Fall Meeting. Surely many of us get a similar feeling to the one we had when we were forced to abandon those analog instruments for hybrid and then digital ones.
Well, let's spend some time on it and see whether we come out any less confused:
1. The procedure: simplification
Basically, the process seeks to do the same thing that was always done, trying to overcome the limitations of the earlier "technologies" (because they were technologies too), cutting down time and equipment by making use of information technologies:
- A small remote-controlled plane replaces the piloted aircraft: no fuel, no per diems, no pilot, no overflight permits, and with the possibility of flying a route drawn in advance.
- A GPS captures latitude, longitude and altitude; presumably there is a ground base station against which to correct the positions taken, literally, "on the fly".
- A digital camera with enough megapixels to be called "high resolution", whereas the old guard spoke in microns. Naturally, this eliminates the problem of developing negatives, scanning them to micron precision and all that jazz...
- A lightweight onboard computer that associates each coordinate with its capture in a simple KML and sends it via SMS to a ground operator, who semi-automatically stretches the images over certain control points taken from the terrain or from a digital model.
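As a rough illustration of the "associate the coordinate with the capture in a simple KML" step, something like the following sketch in Python would do. The function name and structure are my own assumptions, not Pict'Earth's actual software or packet format:

```python
# Hypothetical sketch: pairing one GPS fix with one image capture as a
# minimal KML Placemark. Illustrative only, not Pict'Earth's code.

def capture_to_kml(image_name: str, lat: float, lon: float, alt: float) -> str:
    """Return a minimal KML Placemark tying a photo to its GPS fix.

    Note: KML lists coordinates in lon,lat,alt order, not lat,lon.
    """
    return (
        "<Placemark>"
        f"<name>{image_name}</name>"
        "<Point><coordinates>"
        f"{lon:.6f},{lat:.6f},{alt:.1f}"
        "</coordinates></Point>"
        "</Placemark>"
    )

print(capture_to_kml("IMG_0001.jpg", 14.081900, -87.206300, 1150.0))
```

A string this small is exactly what could travel over SMS to the ground operator.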
We are left wondering whether they have a way to record the instantaneous attitude of the camera, a consequence of the aircraft's tilt at the moment of capture, known as roll, pitch and yaw. But fine, let's move on to the next point.
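Those three angles are what photogrammetrists use to build the rotation part of the exterior orientation. A minimal sketch of that matrix, assuming a yaw-pitch-roll (Z-Y-X) convention; this is my own illustration, not anything Pict'Earth has published:

```python
import math

# Hypothetical sketch: the camera attitude at the instant of capture as a
# rotation matrix built from roll, pitch and yaw (Z-Y-X convention).

def rotation_matrix(roll, pitch, yaw):
    """3x3 matrix R = Rz(yaw) * Ry(pitch) * Rx(roll), angles in radians.

    Applying R to a camera-frame vector corrects it for the platform's
    tilt at the moment of exposure.
    """
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
```

Without these angles (or an adjustment that estimates them), every tilted exposure is simply stretched flat, which is part of why the precision question below matters.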
2. The good: saving time and costs
Clearly, the first gain is time; we know this is a major problem of the conventional methodology, especially when the work is contracted to a private company. Depending on the amount of territory to cover or its geographic location, sometimes you have to wait for summer, and for a window without too much smoke from the fires... what a pain!
Another gain is that under the conventional procedure it is impossible to cover a region of five square kilometers without risking money and the danger of making a fool of yourself. For that reason these jobs are only within reach of government institutions, temporary projects or large companies dedicated to the field.
In terms of costs, we know what this runs (a lot of money), and the smaller the coverage, the higher the price per square kilometer. Additionally, in some countries the National Geographic Institute or the security authorities must authorize the flight, so you pay the same extra fees whether you take 10 photographs or 100,000, and of course this adds to the cost.
In many cases the contract also includes the commitment to hand over the negatives, which then get sold under the table to the competition, or ultimately those expensive negatives end up in a basement full of cockroaches.
If we consider that with these new methods flights can be made over specific areas, with irregular shapes and above all small extents, without having to plan a flight under aeronautical procedures or ask permission to photograph what Google already shows for free... surely it will come out cheaper, at least the flight, because the office processes are already almost automated.
3. The bad: precision is not systematized
What smells bad in all this is that everyone focuses on the photos and the digital process of orthorectifying them, but we hear little about densifying the existing triangulation network, which in many cases is inconsistent anyway. It seems they only talk about stretching the mosaic of images over recognized points. But recognized against what?
This is delicate, because the premise does not change with the adoption of new technologies: "the lower the density of the geodetic network, the lower the accuracy of the orthorectified products." And it is not that there are no formally patented proposals for a process like this, albeit complicated to the extreme; we just see no results from their improvement plans.
In the case of the Pict'Earth people, they stretch the images so that they fit the Google Earth data (!!!). We understand this is so the data does not look broken, because if they placed the images where they actually belong, they could end up with something like 30 meters of displacement. The problem, then, is that all the material these people generate and upload to Google Earth inherits the same imprecision as the beloved virtual globe (2.50 m relative, 30 m absolute, not stated and with no published metadata). And it is not that everything is wrong; it is that any technical process that wants to be sustainable must be systematized.
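For the record, systematizing the precision could start with something as simple as measuring and publishing the residuals at independent check points. A minimal sketch of the standard planimetric RMSE computation (my own illustration of what such metadata would report):

```python
import math

# Sketch: planimetric root-mean-square error at independent check points,
# the kind of figure that published accuracy metadata should state.

def rms_error(measured, reference):
    """RMSE between check points measured on the orthophoto and their
    reference coordinates from the geodetic network.

    Each argument is a list of (x, y) tuples in meters, in matching order.
    """
    assert measured and len(measured) == len(reference)
    sq = [(mx - rx) ** 2 + (my - ry) ** 2
          for (mx, my), (rx, ry) in zip(measured, reference)]
    return math.sqrt(sum(sq) / len(sq))
```

Publishing a number like this, together with the source of the check points, is exactly the metadata the 2.50 m / 30 m figures above are missing.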
4. The ugly: change meets resistance from the connoisseurs and recklessness from the neogeographers
Let's face it: when we were told we would no longer use those mirror devices with the photograph negatives projected onto the plate to burn the orthophoto, we did not like it, because we believed a computer program with its mathematical methods lacked the judgment to distinguish shadows from spots on the mirror. The story is the same; what is happening now is the semi-automation of the capture process, and just like the previous change, it will trade quality for time.
Back then we fussed over the "precision" of the final product, knowing full well that these remain models of reality. So on one side we have the "neogeographers" with PDA in hand, and at the other end ourselves with the total stations. We need to stay open, because our hybrid processes will inevitably be replaced by the simplified ones; sooner or later their equipment will even achieve greater precision, and they will do it for less money... the third, fifth and sixth premises of Catastro 2014.
The best thing we can do is make sure our surveying schools do not fall behind in the use of new technologies, and that they never stop teaching the principles that underpin their use. In the end, the cup of coffee will taste the same... curtain.
5. The conclusion: relevance defines the precision, and precision requires the method
Let's go back to what we said before: the relevance of the data means there are no good or bad maps, only facts. The job of the data provider is to deliver facts, with stated conditions of precision, tolerance and relevance. The one who surveys the boundary says "I went, I saw, I measured, and this is what I got... with this method," while the one who delivers the orthophoto says "I flew, or I did not fly, I took pictures, I took control points, and this is what I got... with this method."
Orthophotos in real time? It's possible; in the end the method defines the precision... and if the relevance is clear, it does not matter that while the airplane was flying we were playing on Twitter.