
The Emergence of Virtual Service Oriented Grids


In the ever-changing world of development and IT, understanding the possibilities new paradigms present and their capacity to change the way business is done is imperative. In much the same way the Internet changed business forever, virtual service oriented grid computing has the potential to force change once again. To better understand how this can be possible, one must understand how virtualization and service orientation work, and how these technologies, used in conjunction with grid computing, complement one another. Much of the groundwork in this field has already been laid; it is now a matter of encouraging growth in this direction.

Other People’s Money versus Other People’s Systems

Scaling a business often involves OPM (other people’s money), through partnerships or issuing of stock through IPOs (initial public offerings). These relationships are carried out within a legal framework that took hundreds of years to develop.

In the real world, scaling a computing system follows a similar approach, in the form of resource outsourcing, that is, using other people’s systems, or OPS. The use of OPS has a strong economic incentive: it does not make sense to spend millions of dollars on a large system that will see only occasional use.

Even large-scale projects that require significant amounts of computing power usually start with small trial or development runs, with large runs few and far between. Because of the data gathering cycle, some projects reserve the largest and most spectacular runs for the end. A large system that lies idle during development would be a waste of capital.

The use of OPS is facilitated by the other two technologies covered in the book The Business Value of Virtual Service-Oriented Grids (published by Intel Press): virtualization and service orientation. Virtualization makes the sharing of a physical resource practical, whereas the application of service oriented principles facilitates the reuse of these resources.

The sharing of idle workstations that motivated research projects such as Condor [1] at the University of Wisconsin in the early 1990s constitutes an early experiment in OPS.

On a macroeconomic scale there is a powerful economic motivation for OPS. A large grid system represents an investment not of tens of millions of dollars but of hundreds of millions once the physical and staffing infrastructure are added. The hurdle is not as high as it might seem at first glance: investments in resources this large are usually pooled, much in the same way an airline system, a shipping line, or even a skyscraper uses shared or pooled resources. Still, change does not happen until industry momentum is behind it, and a powerful motivator arises when entrepreneurs realize that a viable business lies behind the technology.

Within an economic ecosystem, a group of people with skills in this area may decide to form a company that provides a grid service much more efficiently than is possible with a departmental grid, where the staff tending it may have other jobs, thereby lowering the overall cost to society of providing grid services.

Unfortunately, the full-fledged use of OPS, where computing resources are traded like commodities in a vibrant and dynamic ecosystem, is not a reality today. It requires a sophisticated technical and legal infrastructure that is not yet available, one able to handle service level agreements (SLAs), privacy, assurance that intellectual property (IP) and trade secrets do not leak from the system, as well as user, system, and performance management, billing, and other administrative procedures.

The state of the grid today is quite primitive, similar to the state of commercial practices in the Europe of the seventeenth century, compared to the sophisticated trading markets and financial instruments that exist today. After all, grid technology is less than 20 years old as a distinct discipline. The good news is that progress is happening a lot faster. The authors estimate that grids will come of age in less than 20 years. At that point grids will be interwoven with the fabric of society to the point that they may actually lose the distinct identity they carry today.

Fortunately, there is no need to wait 20 years. There will be a continuum of progress with an increasing portfolio of applications as the infrastructure evolves. Some industries will adopt grids faster than others. Eventually society as a whole will benefit.

As an example, storage in company-hosted data centers may become quaintly anachronistic. Storage will become a commodity, purchased by the terabyte, petabyte, or exabyte, depending on the most popular unit of the time, with quality of service (QoS) defined by SLAs. Storage brokers may also be storage providers holding data on their premises. For large accounts they could be pure-play brokers, placing customer data with a combination of storage providers according to complex statistical allocations that minimize their cost yet meet the SLAs promised to customers. This is analogous to the reinsurance systems used today by large insurance companies to manage risks and redistribute liabilities.

One basic assumption in the material presented is the notion of gradual adoption of grid technology within enterprise computing. This convergence will create excellent opportunities to generate stockholder value. Planning and charting a strategic path will allow organizations to realize the benefits of this value.

Virtualization, Service Orientation, and Grids

One reason behind the increasing adoption of grids in the enterprise is the synergy with two powerful technology trends, namely, virtualization and service orientation. Let’s explore each of these technologies and how they relate to each other.

About Grids

The origin of the term grid, as in grid computing, is shrouded in mystery and ambiguity. Because of the association of grids with utility computing and the analogy of utility computing with electrical power systems, it is likely that the term grid was coined to capture the concept of an electrical power grid, but applied to computer systems.

An electrical power system consists of a number of transmission lines ending up in bus bars. Bus bars may have generators, that is, power sources attached to them, or they may carry electrical loads. The generic name for a bus bar is a node. The aggregation of transmission lines and nodes forms a network or mesh, albeit a very irregular and sparse mesh. The set of reference lines in a map is also called a grid.

An electric power distribution system within a city is similar in structure to a transmission system that spans a state or even a country. The difference is that the distribution system is much more interconnected, because links run along every street as opposed to serving as intercity ties.

Following on this analogy, a grid computing system consists of a set of computers in a network as illustrated in Figure 1. The computers in a grid are complete, functioning computers capable of working standalone. The network is understood to be a standards-based network, such as an Ethernet-based network or the public Internet.

A cluster is a specialized kind of grid where the nodes are usually identical and co-located in the same room or building. The network may be a low-latency, high-bandwidth proprietary network.

This definition of grid is recursive. For instance a cluster within a grid may be represented as a single node.

Figure 1 - Structure of a Computing Grid

The development of grid technology started in the early 1990s as an alternative to running high performance applications on specialized supercomputers costing millions of dollars. Instead, grids allowed the use of less expensive workstations costing in the order of tens of thousands of dollars. As commodity PCs became more powerful, PCs gradually replaced workstations as preferred grid nodes.

High performance computing applications are run in parallel: multiple nodes or computers are applied to the solution of one problem with the goal of reducing the time to solution as much as possible. The computational complexity and the size of data sets are so large that even if they were solvable in one node, that run might take days, if not weeks or more.

We will refer to grids in a fairly generic way. Some writers refer to the subject as the GRID, as in GRID computing, which is not quite correct because the word grid is not an acronym. Likewise, some authors refer to grids as the Grid, as if there were a single all-knowing, all-powerful Grid in the world, in the Orwellian Big Brother sense. Actually the situation is exactly the opposite: grid computing deals with the challenge of managing distributed and federated resources, which is not too different from the proverbial task of herding cats. The rewards of successfully executing a grid strategy are many in terms of efficient use of capital and attaining business agility. A useful first step toward these goals is to demystify the concepts and the technology behind them.

Grids impose restrictions on the type of problems that can be solved. The nodes in a supercomputer can exchange information very fast and at high data rates: they have communication channels that exhibit low latency and high data bandwidth. A computer node running a parallel program requires intermediate results from other nodes in varying amounts depending on the type of application. If communication across nodes does not happen fast enough, the progress of the computation overall is impaired. The delay increases as more nodes are thrown into the computation. There is a point of diminishing returns on the number of nodes that can be used effectively for a certain computation; this maximum number of nodes defines the scalability limit for a specific system architecture. The scalability limits for grids are smaller than for specialized supercomputers. For certain applications, a supercomputer may support efficient runs employing hundreds of nodes, whereas a grid running the same application may hit a scalability wall after just a few nodes.

A computing grid, reduced to its barest essentials, is a set of computing resources connected by a network. The computing resources comprise processing and storage capabilities. Networks are necessary because the computing resources are assumed to be distributed, within a room, across buildings, or even across cities and continents. The networks allow data to move across processing elements and between processing elements and storage.

Distribution introduces complexity. A single computer in one room would be easier to program and use than 1,000 smaller computers collectively possessing the same processing and storage capability, but scattered across seven continents. Obviously, distribution is not accidental; there must be powerful reasons for it to manifest itself or otherwise it would not happen. The reasons are economic, which ultimately reflect physical limitations of how powerful a single node can be.

When it comes to pushing the envelope for performance, building a single, powerful computer becomes more expensive than a group of smaller computers whose collective performance equals the performance of the single computer. Pushing technology to the limits to increase the performance of a single processor eventually leads to a cost wall requiring expensive tricks for small performance gains.

Under these conditions it becomes cheaper to attain additional performance through the use of replication. This dynamic takes place in multiple contexts. For instance, in 2004 Intel found that a successor to the Pentium® 4 processor, codenamed Prescott, would have hit a “thermal wall.” Because of the increase in the number of transistors in the chip and the need to operate the single processor at a higher frequency, the successor chip would have run too hot and consumed too much power at a time when power consumption was becoming an important factor for consumer acceptance. This realization led to the first generation of dual-core processors in 2006, carrying two CPUs in a chip. Each CPU possesses a little less processing capability, but the combined performance of the two cores is larger than that of the prior generation processor, and the power consumption is also smaller.

Similar economic considerations favor building processors with an increasing number of cores, as well as computers with multiple processors.

Replication allows overcoming the physical limitations of single processing elements. For instance, if it takes a single core 16 minutes to update 100,000 records, a server with two quad-core CPUs can theoretically do the job in 2 minutes. Four servers applied to the same job could reduce the time to a mere 30 seconds. The eight cores in a server, and the four servers, are said to be working the problem in parallel.

Unfortunately, parallelism comes with overhead. Even with the four servers the total processing time may end up at 1 minute, not the expected 30 seconds. Why? It’s the computer version of too many cooks in the kitchen. The database may not be able to lock records individually during the update. If a lock covers two or more records at a time, access to those records is serialized, and the update of the other records must wait until the CPU handling the first record is done. Furthermore, it takes time to move data across a distributed computing system, and delays invariably ensue.
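
To make the arithmetic above concrete, here is a minimal sketch, in Python with purely illustrative numbers, contrasting the ideal speedup from adding cores and servers with the speedup obtained when a fraction of the work is serialized (for example by coarse-grained record locking), in the spirit of Amdahl's law:

```python
# Illustrative only: contrast ideal parallel speedup with the speedup seen
# when a fraction of the work is serialized (e.g., by coarse record locking).

def ideal_time(job_minutes: float, workers: int) -> float:
    """Perfect scaling: the work divides evenly across all workers."""
    return job_minutes / workers

def amdahl_time(job_minutes: float, workers: int, serial_fraction: float) -> float:
    """Amdahl-style model: a fraction of the work cannot be parallelized."""
    serial_part = job_minutes * serial_fraction
    parallel_part = job_minutes * (1.0 - serial_fraction)
    return serial_part + parallel_part / workers

if __name__ == "__main__":
    job_minutes = 16.0           # one core updating 100,000 records
    for workers in (1, 8, 32):   # one core, one 8-core server, four 8-core servers
        print(f"{workers:3d} cores: ideal {ideal_time(job_minutes, workers):5.2f} min, "
              f"4% serialized {amdahl_time(job_minutes, workers, 0.04):5.2f} min")
```

With even a small serialized fraction, the 32-core run lands closer to a minute than to the ideal 30 seconds, which is the effect described above.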

Another motivation for replication is to enhance availability. For instance, a grid with 10,000 data storage nodes has potentially 10,000-fold redundancy. The probability of losing data in this environment is practically zero. A 10,000-fold redundancy is likely overkill. In practice, some of the replicated resources are used to enhance availability, compensating for relatively unreliable nodes and communication networks, and some are used for scalability. Hence the 10,000-node data grid may end up with the storage capacity of 2,500 nodes under quad redundancy, still vastly larger than the capacity of a single node and still highly available.
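
The redundancy tradeoff can be sketched with a back-of-the-envelope calculation. Assuming, purely for illustration, that each replica of a data item is lost independently with some probability over a given period, the chance of losing the item falls off exponentially with the number of replicas:

```python
# Back-of-the-envelope sketch: probability of losing a data item replicated
# k times, assuming each replica is lost independently with probability p
# over some period. All numbers are illustrative assumptions.

def loss_probability(p: float, replicas: int) -> float:
    """The item is lost only if every one of its replicas is lost."""
    return p ** replicas

if __name__ == "__main__":
    p = 0.05   # assumed chance of losing any single, relatively unreliable node
    for k in (1, 2, 4):
        print(f"{k} replica(s): P(loss) = {loss_probability(p, k):.2e}")
    # With quad redundancy, the 10,000-node grid offers 2,500 nodes' worth of
    # capacity, but any given item is lost only with probability p**4.
```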

What has taken grids beyond their high performance computing roots is synergy with two other emerging technology trends: service orientation and virtualization.

Grids: A Physical Description

In the previous section we spoke about grid attributes and defined grids as computing nodes connected by a network. In this section we describe how grids are constructed. It may be useful to revisit Figure 1 as we go through the different elements in this section.

There is nothing special about nodes in a grid other than their being connected to a network with the purpose of working collaboratively in some common application. The connecting network needs to use open protocols such as TCP/IP; otherwise connectivity may be difficult to achieve.

Here are some node examples:

  • A laptop or desktop PC. Any networked PC can become a member of a grid. A classic example of a grid application using networked PCs is the SETI@home program. It is also an example of cycle scavenging, where the application ran as a screensaver when the PC’s owner was not working at the machine (a minimal sketch of this idea follows the list). In a more formal environment, cycle scavenging is performed using high-end workstations, sometimes in global teams, where users in one country use otherwise unused resources in another country several time zones away.
  • A server in a data center. A drawback of cycle scavenging is that the time to solution may be unpredictable, because the target resource may be busy or unavailable when it is needed the most. In this environment it makes sense to deploy grid nodes that are fully dedicated to supporting grid applications. Furthermore, the nodes need not be departmental; they can be part of a company-wide resource pool. Beyond that, it is not far-fetched to think of a grid service provider business model, not unlike Web hosting, where the service provider does business with multiple corporate clients.
  • Clusters and parallel nodes. The makeup of the nodes in a grid has no architectural restrictions. A single node can be powerful indeed, consisting of a multi-CPU computer, a computer with multiple multi-core processors, or even a cluster. We distinguish a grid from a cluster in that the computers in a cluster (also called nodes) are generally co-located in the same building. The nodes in a cluster may be joined by a specialized network with higher bandwidth and lower latency than the networks joining most grid nodes. The tradeoff is that these specialized networks may be proprietary or single-sourced. The use of a proprietary protocol in a cluster is not an issue as long as the communication in and out of the cluster is through a standard protocol.
  • Embedded nodes. At the other end of the spectrum, the nodes in a grid can be simple, dedicated computers. These nodes are said to be embedded in an application. Examples of this case are the computers used in networking routers, in wireless access points, and even microprocessor-driven, intelligent electric meters, or as a matter of fact, any utility meter: gas meters, water meters, or cable company customer premises equipment. Another example would be the swarm of networked surveillance and traffic cameras and environmental sensors deployed in a city.
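
As promised in the first item above, here is a minimal sketch of the cycle-scavenging idea: background work runs only while the host looks idle. The threshold and the work function are hypothetical placeholders; real scavengers such as Condor or the SETI@home client also monitor keyboard and mouse activity, checkpoint their work, and yield the moment the owner returns.

```python
# Minimal cycle-scavenging sketch: do background work only while the host
# appears idle. os.getloadavg() is Unix-only; the threshold and the work
# function are hypothetical placeholders, not part of any real scavenger.
import os
import time

IDLE_LOAD_THRESHOLD = 0.25   # assumed: below this 1-minute load average, treat the host as idle

def host_is_idle() -> bool:
    one_minute_load, _, _ = os.getloadavg()
    return one_minute_load < IDLE_LOAD_THRESHOLD

def do_a_slice_of_work() -> None:
    # Stand-in for a small, checkpointable unit of the grid workload.
    sum(i * i for i in range(100_000))

if __name__ == "__main__":
    for _ in range(120):              # bounded loop so the sketch terminates
        if host_is_idle():
            do_a_slice_of_work()
        else:
            time.sleep(30)            # back off while the owner is using the machine
```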

On the network side, the physical medium varies with distance: from InfiniBand or Ethernet in a LAN if the nodes are close by, to a Metropolitan Area Network (MAN), a citywide network that connects buildings, or a Wide Area Network (WAN) if the nodes span continents or oceans. The communication protocol is almost always TCP/IP.

There is some ambiguity about what constitutes a grid application. Not every distributed application is a grid application. On the other hand, if an application is currently constrained in some way, a grid approach may provide strong guidance on how the application should evolve. For instance, a compute-intensive application such as a finance derivative risk calculation or the solution of heat transfer equations in a data center thermal model might require hours to solve.

Because of performance and software vendor licensing requirements, a common solution environment is to run the application in a single, fast server through a queuing system. As the sophistication of the customer base and the popularity of the application increases, the user community tends to grow and the jobs submitted also grow in complexity and hence take longer to get processed.

If it takes a day or so to get the results of a run back, the effective time to solution is usually more than one day. More often than not there are clerical or technical errors in the run that force a re-submittal. Often a user needs to submit a whole series of parametric runs, each with slight variations in the data, and perhaps these variations are dependent on the results that came back from the prior run.

If it becomes possible to reduce the cycle time for getting results back from 24 hours to one hour the development time can be shortened considerably. Conversely, cycle times longer than what the user community deems reasonable can make users impatient.

A common recourse to shorten cycle times is for the software vendor to provide a multithreaded version of the application that can take advantage of a multi-core CPU or even multiple CPUs in a server.

If the workload overtakes the capability of the most powerful server, the next step up is to run the application in a cluster. This can get expensive, and the cost is hard to justify if the workload is uneven or seasonal. At this point a grid-based solution, where workloads are offloaded to servers elsewhere in the company, or perhaps even to servers outside the company, may be the most efficient way to proceed. This is not an easy feat, because the original application might need extensive retooling to run in a grid environment.
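
As a rough illustration of what such retooling can look like in its simplest form, the sketch below fans a set of parametric runs out to a pool of local workers using Python's standard concurrent.futures. In a real grid the local pool would be replaced by remote nodes behind a scheduler, and run_model is a hypothetical stand-in for the vendor application.

```python
# Sketch of fanning parametric runs out to a pool of workers. run_model() is
# a hypothetical stand-in for the vendor application; in an actual grid the
# local process pool would be replaced by remote nodes behind a scheduler,
# but the submit/collect pattern stays the same.
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_model(interest_rate: float) -> tuple:
    """Placeholder for a long-running solve, e.g., a derivative risk calculation."""
    price = sum((1.0 + interest_rate) ** -t for t in range(1, 361))  # toy computation
    return interest_rate, price

if __name__ == "__main__":
    parameter_sweep = [0.03 + 0.005 * i for i in range(8)]   # the parametric variations
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(run_model, rate) for rate in parameter_sweep]
        for future in as_completed(futures):
            rate, price = future.result()
            print(f"rate={rate:.3f} -> price={price:,.2f}")
```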

Virtualization

Virtualization uses the power inherent in a computer to emulate features beyond the computer’s initial set of capabilities, including complete machines, even machines of a different architecture.

This is mathematically possible because all computers are in essence automatons; to be precise, a special class of automatons called finite state machines: a machine in a specific operating condition, or state, that executes one instruction will always yield the same result for that instruction. Automatons are deterministic in their behavior: the same stimulus under the same conditions will always elicit the same behavior. A virtualized version of a machine requires billions of state changes and a significant amount of scoreboarding; that is, the host machine simulating a virtualized machine needs to remember the state of the virtualized machine at every step of the simulation. Fortunately, this scoreboarding is maintained by the host computer, and modern computers are good at this task.

Simulating the motions, that is, the state changes of the virtualized computer, in the host computer takes extra work on the part of the host: it takes more cycles in the host computer to simulate one cycle in the virtualized computer, plus the overhead of scoreboarding. This overhead can range from 5 percent in the most efficient virtualized environments to a slowdown of orders of magnitude. An overhead of 5 to 15 percent is typical of virtualized server environments where the virtualized and host machines are of the same architecture. If, through virtualization, a machine that originally had a load factor of 15 percent running one application is now able to host four applications at a load factor of 60 percent, the productivity with virtualization is roughly four times the original productivity, and hence even a 15 percent overhead is a fair tradeoff for the value generated.
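
The consolidation arithmetic above can be written out as a small worked example; the load factors and the 15 percent overhead figure are the illustrative numbers from the paragraph, not measurements:

```python
# The consolidation arithmetic from the paragraph above, written out.
# Load factors and the 15 percent overhead are illustrative figures.

def useful_work(load_factor: float, overhead: float) -> float:
    """Useful work delivered per physical host after virtualization overhead."""
    return load_factor * (1.0 - overhead)

if __name__ == "__main__":
    before = useful_work(load_factor=0.15, overhead=0.00)   # one app on bare metal
    after = useful_work(load_factor=0.60, overhead=0.15)    # four apps, virtualized
    print(f"bare metal:  {before:.0%} of the machine doing useful work")
    print(f"virtualized: {after:.0%} of the machine doing useful work")
    print(f"gain:        {after / before:.1f}x, even after the 15% overhead")
```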

Virtualization is also justifiable in environments where the cost of the hardware is a small fraction of the delivered cost of the application, or in data centers where power or space constraints limit the number of physical servers that can be deployed.

One of the first applications of virtualization was virtual memory, which grew out of research done in the early 1960s. Virtual memory allowed machines with limited physical memory to emulate a much larger memory space. This is accomplished by keeping data belonging to the virtual memory space in some other form of storage, such as a hard drive, and swapping data in and out of physical memory as it is used. One downside of virtual memory is that it is significantly slower than an equivalent amount of physical memory.
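
The swapping behavior can be illustrated with a toy sketch: a small "physical memory" backed by a larger, slower store, with pages brought in on demand and the least recently used page evicted. Real virtual memory lives in the processor and the operating system kernel; this is only a model of the idea.

```python
# Toy sketch of demand paging: a small "physical memory" backed by a larger,
# slower "disk", with pages swapped in on demand and the least recently used
# page evicted. Only illustrates the swapping behavior, not a real OS design.
from collections import OrderedDict

class ToyVirtualMemory:
    def __init__(self, physical_frames: int):
        self.physical = OrderedDict()       # page number -> contents (fast)
        self.disk = {}                      # backing store (slow)
        self.frames = physical_frames
        self.page_faults = 0

    def read(self, page: int) -> str:
        if page not in self.physical:       # page fault: bring the page in
            self.page_faults += 1
            contents = self.disk.get(page, f"page-{page}")
            if len(self.physical) >= self.frames:
                evicted, data = self.physical.popitem(last=False)  # evict LRU page
                self.disk[evicted] = data
            self.physical[page] = contents
        self.physical.move_to_end(page)     # mark as most recently used
        return self.physical[page]

if __name__ == "__main__":
    vm = ToyVirtualMemory(physical_frames=3)
    for p in [0, 1, 2, 0, 3, 0, 4, 1]:      # a reference string touching 5 pages
        vm.read(p)
    print("page faults:", vm.page_faults)
```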

On the other hand, in spite of the slowdown, virtualization presents some undeniable operating advantages. The nodes in a grid are easier to manage if they are all identical, and they need only be virtually identical, not physically identical.

Virtualization is playing a role in the current trend toward the separation of data, the applications that manipulate that data, and the computers that host those applications.

Service Orientation

The notion of service orientation comes from the business world. Any business community or ecosystem is structured as a set of services. For instance, an automobile repair shop in turn uses services from power utility companies, telecommunications services, accounting and legal services, and so on. A client submitting a work request for an automobile repair uses a highly stylized process (interface) that is very similar to the process used by other service providers: any automobile repair shop accepts phone appointments, has a reception desk, and has a billing desk. On the other hand, activities not directly related to the business at hand, such as processes for tax payments, need not be exposed.

Services are composable; businesses providing a service use services provided by other businesses to build theirs, through a process of integration or aggregation, as illustrated in Figure 2.

Services can be recursively composable: Intel purchases laptops from original equipment manufacturers that in turn use Intel microprocessors. Intel benefits from the manufacturers’ capability to integrate microprocessors into their products. Recursion is depicted in Figure 2, with S4 supporting S1, but also making use of S1.

The largest and most complex organizations devised by humankind, such as federal governments and multibillion-dollar global companies, rely on thousands upon thousands of services across the globe for their functioning, and the providers of these services can be global organizations in their own right.

Figure 2 - Composite Service S1 with Supporting Services S2, S3, S4 and S1 (through Recursion)

It would be difficult to imagine the largest automobile manufacturing companies functioning if they also had to build core expertise in mining the iron ore for their vehicles or in drilling for oil.

Relationships in an economic ecosystem tend to be loosely coupled, sometimes very loosely coupled: once a car is sold, the manufacturer does not handle the refueling of the vehicle. The purchaser relies on a network of providers of energy services. However, the energy services need to be there or the cars would not sell. This circumstance is what makes the bar for introducing alternative energy vehicles so high. The market for hydrogen-fueled fuel cell vehicles is very small today not necessarily because of the inherent complexities of the technology; it is also because the network of services that would support this emerging technology is not there yet.

Likewise, it could be argued that the market for pure electric or plug-in hybrid vehicles has not developed because of the range limitations of the available battery technology, which makes electric powered vehicles “too expensive,” where too expensive means anything above USD 25,000. Yet traditional internal combustion engine vehicles in the range of USD 30,000 to USD 60,000 are not considered extravagant. This is not to say that electric vehicles are held to a different standard; rather, it suggests that pricing may not be the real issue standing in the way of adoption. The foundation services that would support a large market for electric vehicles are simply not developed yet: if every parking lot slot had a shock-proof inductive charging station, the relatively small range of these vehicles would be less of a concern.

IT applications and infrastructure expressly designed to function in and support this service business world are said to be service oriented. A service oriented architecture, or SOA, is any structured combination of services and technologies designed to support the service oriented model. There is no specific date or person associated with this technology trend. Rather, service orientation represents an evolution of preexisting trends, and it is certainly a product of its times. The adoption of service orientation accelerated considerably after the dot-com crash at the start of the twenty-first century, which triggered an existential crisis for IT departments. One of the outcomes was a renewed effort to align IT with business needs, to underscore the value of technology in supporting the business and possibly stave off reductions in force.

Service orientation is a transformative force today to the extent that the largest of the IT shops are retooling their operations behind SOA. The motivation behind this retooling is a significant reduction in operating cost.

Consensus has been building in the industry to align utility computing, or at least business applications, with the notion of a service. A service in the generic sense is an abstraction for a business service. A service is defined by an interface. From an interface perspective, it can be as simple as a credit card purchase authorization that takes an account number and a purchase amount and returns an authorization approval. Or it can be as complex as the processing of a mortgage loan or almost any other business transaction, all the way up in complexity to a corporate merger or transnational agreement.

Services may be available from more than one entity and are discoverable (mechanisms exist to find them). Services are also fungible (an alternate provider can be substituted) and interoperable: if an alternate provider is chosen, the service must still work.
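
A minimal sketch of these ideas, using the credit card authorization example above: the service is defined by its interface, and two hypothetical providers are fungible because the caller depends only on that interface. The provider names and approval rules are invented for illustration.

```python
# Minimal sketch of a service defined by its interface: the credit card
# authorization example from the text, with two fungible providers behind the
# same contract. Provider names and approval rules are made up.
from typing import Protocol

class AuthorizationService(Protocol):
    def authorize(self, account_number: str, amount: float) -> bool:
        """Return True if the purchase is approved."""
        ...

class BankAProcessor:
    def authorize(self, account_number: str, amount: float) -> bool:
        return amount <= 5_000.00            # toy approval rule

class BankBProcessor:
    def authorize(self, account_number: str, amount: float) -> bool:
        return amount <= 2_500.00            # different provider, same interface

def checkout(service: AuthorizationService, account: str, amount: float) -> str:
    # The caller depends only on the interface; either provider can be substituted.
    return "approved" if service.authorize(account, amount) else "declined"

if __name__ == "__main__":
    print(checkout(BankAProcessor(), "4111-0000-0000-0000", 3_200.00))  # approved
    print(checkout(BankBProcessor(), "4111-0000-0000-0000", 3_200.00))  # declined
```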

Services also provide a binding mechanism or contract that can be sorted out right up to the moment the service is invoked. This binding mechanism can include functional specifications (the specific items exchanged during an invocation or transaction) or nonfunctional specifications, usually items related to quality of service (QoS) or service-level agreements (SLAs).

There are certain similarities between services and the notion of methods, procedures, or subroutines in programming languages. The difference is that services transcend programming languages and apply to business processes or, as a matter of fact, almost any human activity.

The need for the capabilities brought about by the notion of services was voiced by software engineers and language designers as far back as the early 1970s. It is only today, during the first years of the twenty-first century, that the state of the art has advanced enough to allow this dream to be realized. The first incarnation of the notion of services has been attained through Web services as the mechanism for conveying information, with XML as a universally understood data format.

In a services environment, a fundamental assumption is that IT systems and processes are aligned and designed to support the notion of services. IT entities that conform to this notion are understood to be service oriented. Finally, IT entities that are put together (architected) to conform to service orientation are understood to follow an SOA.

In theory, every service could be built uniquely from the ground up. This strategy would be horrendously expensive because certain basic and common functions would need to be replicated. One common example would be the employee roster for a company, where one copy of this information would be maintained by human resources to keep track of benefits, another by IT telecommunications for the phone directory, with several subsets scattered all over the enterprise. Most information systems started as local efforts and grew into more or less isolated silos. Service-oriented silos would bring no advantages over traditional silos. In fact, we can claim that SOAs bring no extra capabilities that could not be attained through traditional means; the advantage of SOA is mainly economic.

Because services are fungible, they also become reusable: each time a service is reused, we have an instance of a service that does not need to be re-implemented. Because IT services are designed in alignment with the business, there will be less of a semantic gap in adapting them to existing and new business uses. In a mature SOA environment, very few services will be built from the ground up. Most services will be built out of pre-existing services by composing or compounding already available services.

Applications built this way are said to be compound applications. There is no limit to how deeply this composition process can be implemented within applications, except for possible performance or organizational barriers.

A service in general can essentially be any human activity. IT practitioners deal with a more restricted notion of service. Although a service can trigger a physical activity, such as requesting the visit of a residential appraiser as part of home mortgage filing, a service in the sense of Information Technology needs to have a computerized front end that allows the service to be invoked or summoned by other services. The most commonly used technology to invoke a service is Web services. In this context, and elaborating on the initial discussion in the preface, we call the embodiment or realization of a service a servicelet or microservice.

In other words, a servicelet becomes the building unit for a compound application. It is usually implemented as a self-contained unit of hardware and software. Servicelets are designed to be combined with other servicelets into a compound application using Web services technology. A servicelet may be a service, but not all services can be used as servicelets unless they are front-ended appropriately.
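
The sketch below shows, under simplifying assumptions, what front ending a servicelet with a web-callable interface might look like using Python's standard-library XML-RPC modules. A production SOA stack would typically use SOAP/WSDL or REST instead, but the pattern of exposing an endpoint and invoking it from a composing application is the same; the operation and its approval rule are made up for illustration.

```python
# A minimal "servicelet": a self-contained unit of function with a
# web-callable front end, built here with the standard-library XML-RPC modules.
# Production SOA stacks use SOAP/WSDL or REST rather than XML-RPC, but the
# expose-then-invoke pattern is the same.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def authorize(account_number: str, amount: float) -> bool:
    """The servicelet's single operation (toy approval rule)."""
    return amount <= 5_000.00

if __name__ == "__main__":
    # Front end the servicelet with a web-callable endpoint.
    server = SimpleXMLRPCServer(("127.0.0.1", 8900), logRequests=False)
    server.register_function(authorize, "authorize")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # A compound application composes the servicelet by invoking its endpoint.
    proxy = ServerProxy("http://127.0.0.1:8900/")
    print("purchase approved:", proxy.authorize("4111-0000-0000-0000", 1_250.00))
```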

SOA is not free. Functionality cannot be deferred to other services ad infinitum; the buck needs to stop at some point with an application that does the actual work, and this functional piece needs to be implemented at least once. Systems in an SOA environment need to be designed for reuse, at additional effort and expense. For instance, making a service discoverable by other organizations usually means maintaining a service repository. This repository could be very informal, in the form of word of mouth or e-mail queries when architects and implementers know each other. Or it could be a formal Universal Description, Discovery, and Integration (UDDI) repository accompanied by Web Services Description Language (WSDL) mechanisms for describing service end points.

Instituting an SOA environment involves trading off extra costs upfront that will impact a project against a long-term common benefit. This is invariably a tough call, and a transformation toward an SOA environment will not happen without a deliberate top-down plan in place.

On the other hand, the strategic benefits SOA brings change the rules of the game. They are associated with organizations at the most advanced stage of the Gartner Infrastructure Maturity Model, where a policy-based environment is the norm, or in the last stage of enterprise architecture maturity as documented by studies at the Massachusetts Institute of Technology (MIT) Sloan Center for Information Systems Research (CISR), in which business modularity becomes ingrained in the organization's culture.

In this environment, IT resources can be repositioned swiftly to support almost any strategic and tactical need. An example of a tactical request would be a sales and marketing campaign in response to specific market conditions.

These goals become achievable not necessarily because the data center's physical plant can be grown at will, although an increased capability will become available, but because SOA allows rearranging and reusing existing resources in response to these needs.

IT becomes an instrument for business growth instead of a limiting factor. Discussions about IT budgets center on value, on how much IT will contribute to the business, instead of on why a certain request can't be fulfilled because it would cost too much.

Service orientation first became known at the application level, and hence when we hear SOA we usually think of business applications. Actually, the effects are profound at every level of abstraction in a business.

Virtualization + Service orientation + Grids = Virtual Service Oriented Grids

Virtual service oriented grids have developed as a distinct subspecies from the original grids targeted to run high performance computing applications. This evolution broadens the market appeal of the original HPC grids. The original HPC grids continue evolving with emphasis on performance.

There are two distinct flavors of enterprise grids depending on where the hosting hardware is placed, namely data center grids and PC grids. PC grids are the contemporary descendants of the original workstation grids and may still be used as hosts for cycle scavenging. Nodes in PC grids are assigned to specific individuals in an organization. Figure 3 depicts the relationships just described.

Data center grids use server-based, anonymous nodes, in the sense that the nodes are not assigned to specific users or even applications. Cycle scavenging in a data center setting is possible but rarely practical. Placing cycle scavenging under a separate administrative structure to sweep up computer cycles not used by the primary application would be a lot of hard work for little return in increased server utilization, and the scheme would face significant political hurdles because of the opposition of the main application owners. It makes more sense to assign workloads to servers in a consistent and comprehensive fashion, with all loads sharing an allocation of prioritized server resources, instead of making a distinction between a primary workload and a cycle-scavenging workload.

It could be argued that because there is a distinct owner for a laptop client, the scavenging model still makes sense. However, because the distinction between a client and a server will blur, as will the distinction between “fat” and “thin” clients, the scavenging model for PC clients will also become obsolete. What we are left with is a set of service-based applications to be run and a set of pooled hardware hosts to run them. The decision on where to run the services will eventually be made by the operating environment, subject to the appropriate policies.

Figure 3 - Grid Genealogy Showing the Evolution from the Initial HPC Requirements for Low-cost, Scalable Computing to Grid Computing Today

The analysis of this trend suggests looking at data in a different way. This convergence is leading to the dissociation of data, the applications that manipulate that data, and the computer hosts that run those applications.

In a virtualized grid environment data is no longer bound to a machine, and hence protecting a particular machine from theft makes little sense as a way of protecting the data. Data becomes disembodied; it is simply present when it’s needed, where it’s needed, and in a form appropriate to the device used to access it. Data might look to programs and users like a single entity, for instance a single file. In actuality the system will store, replicate, and migrate the data across multiple devices, yet keep up the illusion of a single logical entity.
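
A toy sketch of this disembodiment: callers see one logical item while the system transparently replicates it across several stores and serves reads from whichever replica is available. The in-memory dictionaries stand in for storage nodes or providers and are purely illustrative.

```python
# Toy sketch of "disembodied" data: callers see a single logical item, while
# the system replicates it across several stores and serves reads from any
# surviving replica. Plain dictionaries stand in for storage nodes.
import random

class ReplicatedItem:
    def __init__(self, name: str, stores):
        self.name = name
        self.stores = stores

    def write(self, data: bytes) -> None:
        for store in self.stores:                # replicate to every store
            store[self.name] = data

    def read(self) -> bytes:
        for store in random.sample(self.stores, len(self.stores)):
            if self.name in store:               # any surviving replica will do
                return store[self.name]
        raise FileNotFoundError(self.name)

if __name__ == "__main__":
    nodes = [dict(), dict(), dict(), dict()]     # four storage nodes
    report = ReplicatedItem("quarterly-report.pdf", nodes)
    report.write(b"...contents...")
    nodes[0].clear()                             # one node fails
    print(report.read())                         # still reads as a single logical file
```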

For more information about virtual service oriented grids, please refer to the book The Business Value of Virtual Service-Oriented Grids by Enrique Castro-Leon, Jackson He, Mark Chang, and Parviz Peiravi.

About the Authors

Enrique Castro-Leon is an enterprise and data center architect and technology strategist for the Intel Digital Enterprise Group, working in OS design and architecture, software engineering, high-performance computing, platform definition, and business development. His work involves matching emerging technologies with innovative business models in developed and emerging markets. In this capacity he has served as a consultant for corporations large and small, nonprofits, and governmental organizations on issues of technology and data center planning.

He has taken on roles as visionary, solution architect, and project manager, and has undertaken the technical execution of advanced proof-of-concept projects with corporate end users and inside Intel, illustrating the use of emerging technologies. He has published over 40 articles, conference papers, and white papers on technology strategy and management as well as on SOA and Web services. He holds PhD and MS degrees in Electrical Engineering and Computer Science from Purdue University.

Jackson He is a lead architect in Intel's Digital Enterprise Group, specializing in manageability usages and enterprise solutions. He holds PhD and MBA degrees from the University of Hawaii. Jackson has over 20 years of IT experience and has worked in many disciplines, from teaching to programming, engineering management, data center operations, architecture design, and industry standards definition. Jackson was Intel’s representative at OASIS, RosettaNet, and the Distributed Management Task Force, and he served on the OASIS Technical Advisory Board from 2002 to 2004. In recent years, Jackson has focused on enterprise infrastructure manageability and platform energy efficiency in dynamic IT environments. His research interests cover broad topics in virtualization, Web services, and distributed computing. Jackson has published over 20 papers in the Intel Technology Journal and at IEEE conferences.

Mark Chang is a principal strategist in the Intel Technology Sales group, specializing in Service-Oriented Enterprise and advanced client system business and technology strategies worldwide. Mark has more than 20 years of industry experience, including software product development, data center modernization and virtualization, unified messaging service deployment, and wireless services management. He has participated in several industry standards organizations to define standards for CIM virtualization models and related Web services protocols. Additionally, Mark has strong relationships with the system integration and IT outsourcing community. He holds an MS degree from the University of Texas at Austin.

Parviz Peiravi is a principal architect with Intel Corporation responsible for enterprise worldwide solutions and design; he has been with the company for more than 11 years. He is primarily responsible for designing and driving the development of service oriented solutions.


[1] Douglas Thain, Todd Tannenbaum, and Miron Livny, “Condor and the Grid,” in Fran Berman, Anthony J. G. Hey, and Geoffrey Fox, editors, Grid Computing: Making the Global Infrastructure a Reality, John Wiley, 2003. ISBN 0-470-85319-0.


This article is based on material found in the book The Business Value of Virtual Service-Oriented Grids by Enrique Castro-Leon, Jackson He, Mark Chang, and Parviz Peiravi. Visit the Intel Press web site to learn more about this book: www.intel.com/intelpress/sum_grid.htm.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be addressed to the Publisher, Intel Press, Intel Corporation, 2111 NE 25 Avenue, JF3-330, Hillsboro, OR 97124-5961. E-mail: intelpress@intel.com.
