Business Angle: Cloud Computing, Part 1
Professional Surveyor Magazine - October 2012

Cloud computing is all the rage at the moment. In fact, it seems that many companies are being caught up in a herd mentality of near panic: “We’re not into cloud computing. We’re falling behind our competitors. We have to do something fast!” Often this feeling of being technologically behind is not grounded in basic business principles. People should be asking, “How does this technology fit into what I am trying to accomplish with my business?”
Here I cover the basics of cloud computing, how to assess its applicability to your business, and practical ways that it might be used to your business advantage. In our industry (fundamental processing of sensor data), the common perspective on cloud computing is that of companies involved in base map data acquisition, such as aerial imagery and kinematic lidar. Thus, the focus of this article is on large-data-volume processing environments. However, the general information is applicable to all businesses. Part one covers definitions and virtualization; part two covers practicality and the pros and cons of using a public cloud.
What Is Cloud Computing?
Cloud computing is, basically, a configuration of computing resources, both hardware and software, that “elastically” and automatically reconfigures to accommodate a varying processing load. This cloud may be in your own computer room (a private cloud) or supplied by an outside provider such as Amazon or Microsoft.
The National Institute of Standards and Technology (NIST, within the Department of Commerce) provides a detailed, formal definition. From them:
Five essential characteristics
- On-demand self-service: you can provision your own service needs automatically.
- Broad network access: services are available via standard devices and connectivity models, such as over the internet from your desktop.
- Resource pooling: the provider is servicing many clients from a computer center.
- Rapid elasticity: the amount of resources you use can automatically increase and decrease, tracking your needs.
- Measured service: resource usage is automatically metered, monitored, and reported, which is what makes a pay-by-unit billing model possible.
Three service models
- Software as a Service (SaaS): you simply run an application that is remotely hosted via a connection (typically a web browser). Salesforce.com is a good example.
- Platform as a Service (PaaS): you deploy your software applications on a hardware and software foundation provided by the cloud service. Thus, the service might provide an elastic model of the operating system (OS), a database system (e.g. Oracle, SQL Server), development libraries, and so forth.
- Infrastructure as a Service (IaaS): this is an “on the metal” model where you are essentially renting hardware in a unitized fashion. You provide everything else, including the operating system. (A brief provisioning sketch follows this list.)
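To make the on-demand and IaaS ideas concrete, here is a minimal sketch of provisioning (and then releasing) a single virtual server programmatically. It assumes Amazon EC2 and the Python boto library; the machine image ID, instance type, and key pair name are placeholders for illustration, and other providers use different calls entirely.

import boto.ec2

# Connect to a region; credentials come from the environment or a boto config file.
conn = boto.ec2.connect_to_region("us-east-1")

# Ask the provider for one server of a given size, built from a machine image.
# "ami-12345678" and "my-keypair" are placeholders, not real identifiers.
reservation = conn.run_instances(
    "ami-12345678",
    min_count=1,
    max_count=1,
    instance_type="m1.large",
    key_name="my-keypair",
)
instance = reservation.instances[0]
print("Provisioned instance: %s" % instance.id)

# ... run your processing job on the instance ...

# Release the hardware when the job is finished; billing stops (measured service).
conn.terminate_instances(instance_ids=[instance.id])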
Four deployment models
- Private cloud: the cloud is exclusively dedicated to a single customer. Thus, it may simply be the computer center of an organization serving numerous business units in a way that meets the cloud characteristics.
- Community cloud: here the cloud serves a collection of users who share a common foundation need. An example is a rating system shared by insurance companies through a professional organization.
- Public cloud: the cloud is provisioned for general public use. Salesforce.com and other hosted software services fit in this category.
- Hybrid cloud: NIST defines “hybrid” as connectivity between two or more clouds. They do not provide a concrete example.
I think that NIST missed one of the most important models: what I call “System as a Service” (I’m not sure what acronym I can use because SaaS is taken). The most prominent example is the combination of the iPod (and all other “i” devices), iTunes, and iCloud. All of these components work together to form a seamless integration of a purpose-built hardware device with a back-end service that goes far beyond storage and compute resources. (For example, iCloud will perform “matching” on your song library, substituting a higher-quality version.) An example more related to our field is the Trimble Gatewing sUAS (small unmanned aerial system) and its cloud-based image processing solution. This aspect of cloud computing may emerge as the most valuable to our industry.
Virtualization
It is difficult to talk about cloud computing without talking about virtualization. Virtual means that something appears to exist as a physical thing, but, in fact, the physical thing is being emulated in some way. That is not a very clear description, but keep reading.
An early use of computer virtualization was to allow a single physical computer to host software that required different types of operating systems. Rather than hosting the OS directly on the hardware, a software level was created that emulated the underlying hardware. By running two of these side by side on a single physical computer (the hardware host), it was possible to host the two disparate operating systems.
An early example is the “Hypervisor” shipped by IBM in 1965 with the IBM 360 that allowed the 360, in a time-sharing mode, to host both 360 software (native mode) and 7080 software (emulation mode). Hypervisor is now the generic term used for this hardware/software abstraction.
Virtualization existed in one form or another throughout the mainframe era, the departmental era (e.g. the VAX), the first UNIX wave (mid-1980s), the PC days (current), and the second UNIX wave (LINUX, etc.). It was usually very specific in nature and aimed at a particular problem. VMware Inc., founded in 1998, is probably the company that made hypervisor (virtualization) software a category in its own right, independent of any particular hardware problem.
VMware Workstation was the seminal product from VMware. It was aimed at the desire to run multiple operating systems on the same physical workstation. For example, a developer might want to switch between LINUX and Windows.
VMware rapidly moved to the server domain with VMware Server (today there is a complex family of workstation and server products from VMware, but they are all aimed at solving the same fundamental problem). VMware Server allows multiple operating systems to be hosted on a server-class machine. One of the big focus areas for VMware was an improvement in reliability for server centers. To accomplish this, VMware Server was made dynamically rehostable. This means that a virtual server can be paused, a copy of the entire stack of software running on it made, and that copy moved to a different machine running VMware Server and restarted there. This sort of dynamic movement is a core requirement for an elastic cloud environment.
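Here is a toy sketch of that pause-copy-restart sequence. The class and method names are my own invention for illustration, not the VMware API, and real products (vMotion, for example) copy memory while the guest keeps running rather than pausing it outright.

import copy

class Host:
    """A physical server that can hold the captured state of guest VMs."""
    def __init__(self, name):
        self.name = name
        self.vms = {}  # vm_id -> full software stack state (OS, apps, memory image)

    def pause_and_capture(self, vm_id):
        state = self.vms.pop(vm_id)     # freeze the guest and detach it from this host
        return copy.deepcopy(state)     # copy of the entire running stack

    def restore_and_resume(self, vm_id, state):
        self.vms[vm_id] = state         # the identical stack continues on new hardware

host_a = Host("rack1-server3")
host_b = Host("rack2-server7")
host_a.vms["vm-42"] = {"os": "Windows Server 2008", "apps": ["lidar tile processor"]}

snapshot = host_a.pause_and_capture("vm-42")   # pause and copy
host_b.restore_and_resume("vm-42", snapshot)   # move and restart elsewhere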
Benefits
In our industry, one of the main uses of virtualization is to host multiple copies of an operating system on the same physical server. This provides three major benefits:
Optimization of hardware resources. It is quite often the case that a server is underused (in fact, servers dedicated to hosting a single instance of an OS typically run at 20% of hardware capacity). By “virtualizing” the server using software such as VMware, multiple operating systems (“guests”) can run on the same physical server. This saves not only on hardware costs but also on space, power, and HVAC requirements. (A simple worked example follows this list of benefits.)
Isolation of software. You might ask, “Why not just host the native applications within the same OS?” The reason is related to reliability (or actually, the lack of reliability). An OS running on a Virtual Machine (VM) can be “rebooted” without disturbing other OSes running on that same physical hardware. Additionally, a software program running amok within one OS on a VM will not cause problems with other OSes running on the same hardware.
The ability to resize the apparent hardware dedicated to an application. Each VM is configured with allocations of processors, memory, I/O bandwidth, and other fundamental hardware attributes. Increasing or decreasing these resources when hosting in a virtualized mode means simply changing some parameters associated with the VM on which the application is hosted.
Of course, you cannot “grow” the instance beyond the physical capacity of the underlying machine, but you can offload other VMs to separate physical machines or move the VM that needs to grow to a new physical machine without the need to rebuild the application stack. You will notice that this is the “elasticity” specified in the NIST description of cloud computing.
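Returning to the first benefit, the consolidation arithmetic is easy to sketch. The figures below are illustrative assumptions (the 20% utilization quoted above plus a conservative packing target), not measurements from any particular server room.

import math

dedicated_servers = 10    # ten machines, each hosting one lightly loaded OS instance
avg_utilization = 0.20    # typical load when every OS owns its own hardware
packing_target = 0.70     # assumed ceiling so virtualized hosts keep some headroom

total_work = dedicated_servers * avg_utilization        # 2.0 "servers" of real work
hosts_needed = math.ceil(total_work / packing_target)   # rounds up to 3 physical hosts

print("%d dedicated servers consolidate onto about %d virtualized hosts"
      % (dedicated_servers, int(hosts_needed)))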
Considerations
Virtualization does not come without a cost, literally. It is useful to think of each instance of a VM running on a particular server as a physical server itself. This is because the OS running inside each VM is not supplied with the virtualization software and must be separately licensed. Thus, if you are running Windows Server 2008 as your OS and you wish to run four VMs on one physical server, you will need to purchase four copies of Windows Server 2008. Microsoft and other vendors have begun to package software (such as operating systems and database deployments) in bundles aimed at virtualization and hence provide “scalable” pricing.
The second issue in the use of VM software is performance. Some portion of a VM’s operation must occur in emulation mode (as opposed to native processor mode), meaning the VM emulates some function provided by the underlying hardware. The penalty can range from a very efficient 94% (meaning that your Windows Server 2003 is running at 94% of the speed you would realize on native hardware) all the way down to the sub-50% range.
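What that penalty means for a production schedule is easy to work out. The job length below is a made-up example; the 94% and sub-50% efficiencies are the figures quoted above.

native_hours = 10.0                 # hypothetical lidar/image job on bare hardware
for efficiency in (0.94, 0.50):     # the best and worst cases cited above
    virtualized_hours = native_hours / efficiency
    print("At %.0f%% efficiency: %.1f hours instead of %.1f"
          % (efficiency * 100, virtualized_hours, native_hours))

A ten-hour job stretches to roughly 10.6 hours in the best case but doubles to 20 hours in the worst, which is why the overhead matters in a production environment.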
Part two of this series covers the practicality of image/lidar processing in the cloud as well as the pros and cons of using a public cloud.
Lewis Graham is president and chief technical officer of GeoCue Corporation, North America’s largest provider of products, training, and consulting services for airborne and mobile lidar applications. He founded Z/I Imaging Corporation in 1998 by merging the photogrammetry unit of Intergraph with a similar business unit of Carl Zeiss, Germany.