Those of us who use graphics-heavy applications always hope our computers have enough working memory. CAD/GIS programs in particular have always been judged by how long they take to perform everyday tasks such as:
- Spatial analysis
- Rectification and registration of images
- Displaying bulk data
- Data management within a geodatabase
- Serving data
The traditional PC has not changed much in recent years; RAM, hard disk, graphics memory and other specs have only kept growing, but the CPU's operating logic has kept its original design (that is why we still call it a CPU). It has also worked against us that as machines grow in capacity, programs kill the gain by being designed to consume the new potential.
As an example (and only an example): put two users side by side, with identical hardware and data, one running AutoCAD 2010 and the other Microstation V8i, each loading 14 raster images, a parcel file of 8,000 properties and a connection to an Oracle Spatial database. We ask ourselves:
What does one of them have that keeps the machine from collapsing?
The answer is not innovation; it is simply the way the program is built, because the same does not happen with Autodesk Maya, which does far crazier things and performs better. The way of exploiting the PC is the same (so far, in the case of these two programs), and it is on this basis that we judge the programs, because we use them to work, and a lot. This is also why some machines are known as traditional PCs, workstations or servers; not because they are a different color, but because of how they perform when running high-consumption programs: graphic design, video processing, application development, server functions and, in our case, working with spatial data.
Less CPU, more GPU
One of the most notable recent changes in PC architecture is the term coined as GPU (Graphics Processing Unit), which makes it possible to get better performance out of the machine by converting large routines into small simultaneous tasks, without going through the administration of the CPU (Central Processing Unit), whose working capacity is juggled between the hard disk speed, RAM, video memory and a few other particulars (not many others).
Graphics cards are not built just to add video memory; they include their own processor, containing hundreds of cores designed to run parallel processes. They have always had this (more or less), but the current advantage is that manufacturers now offer a (mostly) open architecture, so software developers can count on the existence of a card with these capabilities and exploit its potential. This month's January issue of PC Magazine mentions companies such as nVidia, ATI and others included in the OpenCL alliance.
To understand the difference between CPU and GPU, here is a simile:
CPU, everything centralized
It is like a municipality with everything centralized: it has an urban plan and knows it must control growth, but it is incapable of supervising even the new constructions that violate the rules. Yet instead of concessioning this service to a private company, it insists on keeping the role; the population does not know where to complain about the neighbor who is taking over the sidewalk, and the city gets more disordered every day.
Sorry, I was not talking about your mayor; I was just making a CPU simile, where this Central Processing Unit (in the case of Windows) must make the machine perform in processes like:
- Programs that run when Windows starts, such as Skype, Yahoo Messenger, antivirus, the Java engine, etc. All of these consume part of the working memory with low priority, but unnecessarily, unless they are disabled via msconfig (something many people are unaware of).
- Services that are running: parts of Windows, commonly used programs, connected hardware, or leftovers from uninstalled programs that keep running. These usually have medium/high priority.
- Programs in use, which consume space with high priority. Their execution speed is felt in the gut, because we curse when they are not fast despite having a high-performance machine.
And although Windows does its juggling, practices like keeping many programs open, installing and uninstalling irresponsibly, or adding flashy but unnecessary extras make us ourselves guilty of the machine's poor performance.
It happens, then, that when we start one of the processes mentioned at the beginning, the processor racks its brains trying to prioritize it over the other programs in use. Its few options for optimization are RAM, video memory (which is often shared) and, if there is a graphics card, squeezing something out of it; depending on the type of hard drive and a few other small things, the plaintive moaning could be less.
GPU, parallel processes
It is as if the municipality decided to decentralize, concession or privatize the things that are beyond its reach; even large processes are delivered as small tasks. Thus, based on current regulations, a private company is given the role of monitoring punishable violations in a specific area. As a result (just an example), the citizen can indulge in the delightful pleasure of telling off the neighbor who takes the dog to foul the sidewalk, who builds a wall over part of the sidewalk, who parks improperly, etc. The company answers the call, goes to the site, processes the case, takes it to court and executes the fine; half goes to the municipality, the rest is a profitable business.
This is how the GPU works: programs can be designed so that instead of sending massive processes in the conventional way, they run in parallel as small filtered routines. Oh, wonderful!
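To make the simile concrete, here is a purely illustrative sketch in Python (not what any CAD/GIS vendor actually does, and the function names are invented): the same routine done the "CPU way", as one centralized loop, and the "GPU way", chopped into many small identical tasks handed to a pool of workers at once.

```python
from concurrent.futures import ThreadPoolExecutor

def classify_cell(elevation):
    # One small, independent task: e.g. reclassifying a single raster cell.
    return 1 if elevation > 100 else 0

def classify_sequential(cells):
    # The "CPU way": one centralized loop does everything in order.
    return [classify_cell(c) for c in cells]

def classify_parallel(cells, workers=8):
    # The "GPU way" (in spirit): the routine is split into many
    # identical small tasks dispatched to a pool of workers at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(classify_cell, cells))

cells = [40, 150, 99, 300, 101]
print(classify_sequential(cells))  # [0, 1, 0, 1, 1]
print(classify_parallel(cells))    # same result, computed in parallel
```

A real GPU pushes this idea to hundreds of cores running the same small kernel over thousands of cells simultaneously; the thread pool here only stands in for the concept of decentralizing the work.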
So far, not many vendors are building their applications with these capabilities. Most are betting on reaching 64 bits to solve their slowness problems, although we all know that Don Bill Gates always strolls through those capacities loading unnecessary things onto the next versions of Windows. The Windows strategy includes leveraging the GPU through APIs designed to work with DirectX 11, which will surely be an alternative that all (or most) will accept, because they will prefer it as a standard rather than doing crazy things for each brand outside of OpenCL.
The graphic shows an example of how, between 2003 and 2008, the nVidia GPU has been revolutionizing its capabilities compared to the Intel CPU, along with the geeky explanation of the difference.
But the potential of the GPU is there, and hopefully the CAD/GIS programs will squeeze the necessary juice out of it. Some cases have already been heard of, although the most outstanding is Manifold GIS with nVidia's CUDA cards, in which a digital terrain model generation process that used to take more than 6 minutes ran in just 11 seconds by taking advantage of a CUDA card. A stunt that won them the Geotech 2008 award.
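Why does DTM generation parallelize so well? Because each output cell depends only on its neighbors, so every cell can be computed independently, which is exactly the workload that hundreds of CUDA cores eat up. A hedged toy sketch (the grid and the smoothing formula are invented illustration, not Manifold's actual algorithm), again using a worker pool to stand in for the GPU:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy elevation grid standing in for DTM input data.
GRID = [
    [10.0, 12.0, 11.0],
    [13.0, 15.0, 14.0],
    [12.0, 13.0, 12.0],
]

def smooth_row(r):
    # One independent task per row: average each cell with its
    # horizontal neighbors (a toy stand-in for DTM interpolation).
    row = GRID[r]
    out = []
    for c, v in enumerate(row):
        left = row[c - 1] if c > 0 else v
        right = row[c + 1] if c < len(row) - 1 else v
        out.append(round((left + v + right) / 3, 2))
    return out

# Every row (on a GPU, every cell) is processed simultaneously.
with ThreadPoolExecutor() as pool:
    dtm = list(pool.map(smooth_row, range(len(GRID))))
print(dtm)
```

On a real card the same per-cell kernel would run over millions of cells at once, which is where a 6-minute job collapsing to 11 seconds starts to sound plausible.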
In conclusion: here comes the GPU; we will surely see a lot of it in the next two years.