Agent Technology: Computing as Interaction
A Roadmap for Agent Based Computing

[Cover art: a word cloud of related disciplines, techniques and technologies, including economics, game theory, logic, philosophy, biology, anthropology, sociology, political science, marketing, decision theory, grid computing, trust and reputation, self-* systems, complex systems, ambient intelligence, coordination, negotiation, communication, uncertainty in AI, reasoning and learning, user interaction design, peer-to-peer computing, service-oriented computing, robotics, interoperability, infrastructure, formal methods, programming languages, software engineering, artificial life, organisation design, simulation and the Semantic Web.]

Compiled, written and edited by Michael Luck, Peter McBurney, Onn Shehory, Steve Willmott and the AgentLink Community

Michael Luck, Peter McBurney, Onn Shehory and Steven Willmott
© September 2005, AgentLink III
ISBN 085432 845 9

This roadmap has been prepared as part of the activities of AgentLink III, the European Coordination Action for Agent-Based Computing (IST-FP6-002006CA). It is a collaborative effort, involving numerous contributors listed at the end of the report. We are grateful to all who contributed, including those not named. Neither the editors, authors, contributors, nor reviewers accept any responsibility for loss or damage arising from the use of information contained in this report.

Special thanks go to Catherine Atherton, Roxana Belecheanu, Rebecca Earl, Adele Maggs, Steve Munroe and Serena Raffin, who all contributed in essential ways to the production of this document. The cover was produced by Serena Raffin, based on an original design by Magdalena Koralewska.

Editors:

Michael Luck
School of Electronics and Computer Science
University of Southampton
Southampton SO17 1BJ, United Kingdom
firstname.lastname@example.org

Peter McBurney
Department of Computer Science
University of Liverpool
Liverpool L69 3BX, United Kingdom
email@example.com

Onn Shehory
IBM - Haifa Research Labs
Haifa University, Mount Carmel, Haifa 31905, Israel
firstname.lastname@example.org

Steven Willmott
Departament Llenguatges i Sistemes Informàtics
Universitat Politècnica de Catalunya
Jordi Girona 1-3, E-08034 Barcelona, Spain
email@example.com

The corresponding editor of this document is Michael Luck.

AgentLink III

AgentLink III is an Information Society Technologies (IST) Coordination Action for Agent-Based Computing, funded under the European Commission's Sixth Framework Programme (FP6), running through 2004 and 2005. Agent-based systems are one of the most vibrant and important areas of research and development to have emerged in information technology in recent years, underpinning many aspects of broader information society technologies. The long-term goal of AgentLink is to put Europe at the leading edge of international competitiveness in this increasingly important area. AgentLink is working towards this by seeking to achieve the following objectives.

• To gain competitive advantage for European industry by promoting and raising awareness of agent systems technology.
• To support standardisation of agent technologies and promote interoperability.
• To facilitate improvement in the quality, profile, and industrial relevance of European research in the area of agent-based computer systems, and draw in relevant prior work from related areas and disciplines.
• To support student integration into the agent community and to promote excellence in teaching in the area of agent-based systems.
• To provide a widely known, high-quality European forum in which current issues, problems, and solutions in the research, development and deployment of agent-based computer systems may be debated, discussed, and resolved.
• To identify areas of critical importance in agent technology for the broader IST community, and to focus work in agent systems and deployment in these areas.

Further information about AgentLink III, and its activities, is available from the AgentLink website at www.agentlink.org

In trying to raise awareness and to promote take-up of agent technology, there is a need to inform the various audiences of the current state of the art and to postulate the likely future directions the technology and the field will take. This is needed if commercial organisations are to best target their investments in the technology and its deployment, and also for policy makers to identify and support areas of particular importance. More broadly, presenting a coherent vision of the development of the field, its application areas and likely barriers to adoption of the technology is important for all stakeholders. AgentLink is undertaking this technology roadmapping study in order to develop just such a strategy for agent research and development.

Contents

Executive Summary
1 What is Agent Technology?
  1.1 Agents as Design Metaphor
  1.2 Agents as a Source of Technologies
  1.3 Agents as Simulation
2 Technological Context
3 Emerging Trends and Critical Drivers
  3.1 Semantic Web
  3.2 Web Services and Service Oriented Computing
  3.3 Peer-to-Peer Computing
  3.4 Grid Computing
  3.5 Ambient Intelligence
  3.6 Self-* Systems and Autonomic Computing
  3.7 Complex Systems
  3.8 Summary
4 Agent Technologies, Tools and Techniques
  4.1 Organisation Level
    4.1.1 Organisations
    4.1.2 Complex Systems and Self Organisation
    4.1.3 Trust and Reputation
  4.2 Interaction Level
    4.2.1 Coordination
    4.2.2 Negotiation
    4.2.3 Communication
  4.3 Agent Level
  4.4 Infrastructure and Supporting Technologies
    4.4.1 Interoperability
    4.4.2 Agent Oriented Software Engineering
    4.4.3 Agent Programming Languages
    4.4.4 Formal Methods
    4.4.5 Simulation
    4.4.6 User Interaction Design
5 Adoption of Agent Technologies
  5.1 Diffusion of Innovations
  5.2 Product Life Cycles
  5.3 Standards and Adoption
  5.4 Agent Technologies
  5.5 Modelling Diffusion of Agent Technologies
    5.5.1 Model Design
    5.5.2 Simulation Results
  5.6 Activity in Europe
6 Market and Deployment Analysis
  6.1 Deliberative Delphi Survey
    6.1.1 Industry Sector Penetration
    6.1.2 Deployment of Agent Technologies
    6.1.3 Technology Areas and Maturity
    6.1.4 Standards
    6.1.5 Prospects
  6.2 The Agent Technology Hype Cycle
    6.2.1 The Gartner Analysis
    6.2.2 The AgentLink Analysis
7 Technology Roadmap
  7.1 Phase 1: Current
  7.2 Phase 2: Short-Term Future
  7.3 Phase 3: Medium-Term Future
  7.4 Phase 4: Long-Term Future
  7.5 Technologies and Timescales
8 Challenges
  8.1 Broad Challenges
  8.2 Specific Challenges
  8.3 Recommendations
9 Conclusions
References
Glossary
Web Resources and URLs
Methodology
AgentLink Members
Acknowledgements and Information Sources

Executive Summary

In its brief history, computing has enjoyed several different metaphors for the
notion of computation. From the time of Charles Babbage in the nineteenth century until the mid-1960s, most people thought of computation as calculation, or operations undertaken on numbers. With widespread digital storage and manipulation of non-numerical information from the 1960s onwards, computation was re-conceptualised more generally as information processing, or operations on text, audio or video data. With the growth of the Internet and the World Wide Web over the last fifteen years, we have reached a position where a new metaphor for computation is required: computation as interaction. In this metaphor, computing is something that happens by and through communication between computational entities. In the current radical reconceptualisation of computing, the network is the computer, to coin a phrase.

In this new metaphor, computing is an activity that is inherently social, rather than solitary, leading to new ways of conceiving, designing, developing and managing computational systems. One example of the influence of this viewpoint is the emerging model of software as a service, for example in service-oriented architectures. In this model, applications are no longer monolithic, functioning on one machine (for single user applications), or distributed applications managed by a single organisation (such as today's Intranet applications), but instead are societies of components.

• These components are viewed as providing services to one another. They may not all have been designed together or even by the same software development team; they may be created, operate and be decommissioned according to different timescales; they may enter and leave different societies at different times and for different reasons; and they may form coalitions or virtual organisations with one another to achieve particular temporary objectives. Examples are automated procurement systems comprising all the companies connected along a supply chain, or service creation and service delivery platforms for dynamic provision of value-added telecommunications services.

• The components and their services may be owned and managed by different organisations, and thus have access to different information sources, have different objectives, and have conflicting preferences. Health care management systems spanning multiple hospitals or automated resource allocation systems, such as Grid systems, are examples here.

• The components are not necessarily activated by human users but may also carry out actions in an automated and coordinated manner when certain conditions hold. These preconditions may themselves be distributed across components, so that action by one component requires prior co-ordination and agreement with other components. Simple multi-party database commit protocols are examples, but significantly more complex coordination and negotiation protocols have been studied and deployed, for example in utility computing systems and ad hoc wireless networks.

• Intelligent, automated components may even undertake self-assembly of software and systems, to enable adaptation or response to changing external or internal circumstances. An example of this is the creation of on-the-fly coalitions in automated supply-chain systems in order to exploit dynamic commercial opportunities.
Such systems resemble those of the natural world and human societies much more than they do the example arithmetic programs taught in Fortran classes, so ideas from biology, statistical physics, sociology and economics play an increasingly important role in computing systems.

How should we exploit this new metaphor of computing as social activity, as interaction between independent and sometimes intelligent entities, adapting and co-evolving with one another? The answer, many people believe, lies with agent technologies. An agent is a computer program capable of flexible and autonomous action in a dynamic environment, usually an environment containing other agents. In this abstraction, we have encapsulated autonomous and intelligent software entities, called agents, and we have demarcated the society in which they operate, a multi-agent system. Agent-based computing concerns the theoretical and practical working through of the details of this simple two-level abstraction.

In the sense that it is a new paradigm, agent-based computing is disruptive. As outlined above, it causes a re-evaluation of the very nature of computing, computation and computational systems, through concepts such as autonomy, coalitions and ecosystems, which have no counterpart in earlier paradigms. Economic historians have witnessed such disruption with new technologies repeatedly, as new technologies are created, are adopted, and then mature. A model of the life-cycle of such technologies, developed by Perez (2002), and reproduced in Figure 0.1, suggests two major parts: an installation period of exploration and development; and a deployment period concentrating on the use of the technology. As will be argued later in this document, agent technologies are still in the early stages of adoption, the stage called irruption in this life-cycle. In the chapters that follow, we examine the current status of agent technologies and compare their market diffusion to related innovations, such as object technologies. We also consider the challenges facing continued growth and adoption of agent technologies.

This document is a strategic roadmap for agent-based computing over the next decade. It has been prepared by AgentLink III, a European Commission-funded coordination action, intended to support and facilitate European research and development in agent technologies. The contents of the roadmap are the result of an extensive, eighteen-month effort of consultation and dialogue with experts in agent technology from the 192 member organisations of AgentLink III, in addition to experts in the Americas, Japan and Australasia. The roadmap presents our views of how the technology will likely develop over the decade to 2015, the key research and development issues involved in this development, and the challenges that currently confront research, development and further adoption of agent technologies. This strategic technology roadmap is not intended as a prediction of the future. Instead, it is a reasoned analysis: given an analysis of the recent past and current state of agent technologies, and of computing more generally, we present one possible future development path for the technology.
By doing this, we aim to identify the challenges and obstacles that will need to be overcome for progress to be made in research and development, and for greater commercial adoption of the technology to occur. Moreover, by articulating a possible future path and identifying the challenges to be found along that path, we hope to galvanise the attention and efforts both of the agent-based computing community and of the IT community more generally: these challenges and obstacles will only be overcome with concerted efforts by many people. We hope the ideas presented here are provocative, because a strategic roadmap should not be the end of a dialogue, but the beginning.

Figure 0.1: The phases of technology life-cycles. Source: Carlota Perez

1 What is Agent Technology?

Agent-based systems are one of the most vibrant and important areas of research and development to have emerged in information technology in the 1990s. Put at its simplest, an agent is a computer system that is capable of flexible autonomous action in dynamic, unpredictable, typically multi-agent domains. In particular, the characteristics of dynamic and open environments, in which heterogeneous systems must interact, span organisational boundaries, and operate effectively within rapidly changing circumstances and with dramatically increasing quantities of available information, suggest that improvements on traditional computing models and paradigms are required. Thus, the need for some degree of autonomy, to enable components to respond dynamically to changing circumstances while trying to achieve over-arching objectives, is seen by many as fundamental. Many observers therefore believe that agents represent the most important new paradigm for software development since object orientation.

The concept of an agent has found currency in a diverse range of sub-disciplines of information technology, including computer networks, software engineering, artificial intelligence, human-computer interaction, distributed and concurrent systems, mobile systems, telematics, computer-supported cooperative work, control systems, decision support, information retrieval and management, and electronic commerce. In practical developments, web services, for example, now offer fundamentally new ways of doing business through a set of standardised tools, and support a service-oriented view of distinct and independent software components interacting to provide valuable functionality. In the context of such developments, agent technologies have increasingly come to the foreground. Because of its horizontal nature, it is likely that the successful adoption of agent technology will have a profound, long-term impact both on the competitiveness and viability of IT industries, and on the way in which future computer systems will be conceptualised and implemented. Agent technologies can be considered from three perspectives, each outlined below, as illustrated in Figure 1.1.
1.1 Agents as Design Metaphor

Agents provide software designers and developers with a way of structuring an application around autonomous, communicative components, and lead to the construction of software tools and infrastructure to support the design metaphor. In this sense, they offer a new and often more appropriate route to the development of complex computational systems, especially in open and dynamic environments. In order to support this view of systems development, particular tools and techniques need to be introduced. For example, methodologies to guide analysis and design are required, agent architectures are needed for the design of individual software components, tools and abstractions are required to enable developers to deal with the complexity of implemented systems, and supporting infrastructure (embracing other relevant, widely used technologies, such as web services) must be integrated.

1.2 Agents as a Source of Technologies

Agent technologies span a range of specific techniques and algorithms for dealing with interactions in dynamic, open environments. These address issues such as balancing reaction and deliberation in individual agent architectures, learning from and about other agents in the environment, eliciting and acting upon user preferences, finding ways to negotiate and cooperate with other agents, and developing appropriate means of forming and managing coalitions (and other organisations). Moreover, the adoption of agent-based approaches is increasingly influential in other domains. For example, multi-agent systems are already providing new and more effective methods of resource allocation in complex environments than previous approaches.

1.3 Agents as Simulation

Multi-agent systems offer strong models for representing complex and dynamic real-world environments. For example, simulation of economies, societies and biological environments are typical application areas.

Figure 1.1: Agent-based computing spans technologies, design and simulation

The use of agent systems to simulate real-world domains may provide answers to complex physical or social problems that would otherwise be unobtainable due to the complexity involved, as in the modelling of the impact of climate change on biological populations, or modelling the impact of public policy options on social or economic behaviour. Agent-based simulation spans: social structures and institutions, to develop plausible explanations of observed phenomena, to help in the design of organisational structures, and to inform policy or managerial decisions; physical systems, including intelligent buildings, traffic systems and biological populations; and software systems of all types, currently including eCommerce and information management systems.

In addition, multi-agent models can be used to simulate the behaviour of complex computer systems, including multi-agent computer systems. Such simulation models can assist designers and developers of complex computational systems and provide guidance to software engineers responsible for the operational control of these systems. Multi-agent simulation models thus effectively provide a new set of tools for the management of complex adaptive systems, such as large-scale online resource allocation environments.
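To make the simulation perspective concrete, here is a minimal sketch in Python of the shape such a model typically takes: a population of simple agents repeatedly acts on shared state, and system-level behaviour is read off from the aggregate. The class and function names (Agent, step, run_simulation) and the toy exchange rule are invented for illustration; they are not taken from the roadmap or from any particular simulation toolkit.

    import random

    class Agent:
        """A minimal simulated agent holding a quantity of divisible resource."""
        def __init__(self, name, wealth=1.0):
            self.name = name
            self.wealth = wealth

        def step(self, population):
            # Behavioural rule: transfer a small, fixed amount of wealth to a
            # randomly chosen partner, a toy model of economic exchange.
            partner = random.choice(population)
            if partner is not self and self.wealth > 0.1:
                self.wealth -= 0.1
                partner.wealth += 0.1

    def run_simulation(num_agents=50, steps=200):
        population = [Agent(f"agent-{i}") for i in range(num_agents)]
        for _ in range(steps):
            for agent in population:
                agent.step(population)
        # System-level behaviour emerges from repeated local interactions.
        richest = max(a.wealth for a in population)
        poorest = min(a.wealth for a in population)
        return richest, poorest

    if __name__ == "__main__":
        print(run_simulation())

Even a rule this simple tends to produce an uneven distribution of wealth over time, which is the kind of emergent, population-level behaviour that agent-based simulation is used to study.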
We do not claim that agent systems are simply panaceas for these large problems; rather, they have been demonstrated to provide concrete competitive advantages such as:

• improving operational robustness with intelligent failure recovery;
• reducing sourcing costs by computing the most beneficial acquisition policies in online markets; and
• improving efficiency of manufacturing processes in dynamic environments.

Acklin and International Vehicle Insurance Claims

Netherlands-based Acklin BV was asked by a group of three insurance companies, from Belgium, the Netherlands and Germany, to help automate their international vehicle claims processing system. At present, European rules require settlement of cross-border insurance claims for international motor accidents within 3 months of the accident. However, the back-office systems used by insurance companies are diverse, with data stored and used in different ways. Because of this and because of confidentiality concerns, information between insurance companies is usually transferred manually, with contacts between claim handlers only by phone, fax and email. Acklin developed a multi-agent system, the KIR system, with business rules and logic encoded into discrete agents representing the data sources of the different companies involved. This approach means the system can ensure confidentiality, with agent access to data sources mediated through other agents representing the data owners. Access to data sources is only granted to a requesting agent when the relevant permissions are present and for specified data items. Because some data sources are only accessible during business hours, agents can also be programmed to operate only within agreed time windows. Moreover, structuring the system as a collection of intelligent components in this way also enables greater system robustness, so that business processes can survive system shutdowns and failures. The deployment of the KIR system immediately reduced the human workload at one of the participating companies by three people, and reduced the total time of identification of client and claim from 6 months to 2 minutes! For reasons of security, the KIR system used email for inter-agent communication, and the 2-minute maximum time consists mainly of delays in the email servers and mail communication involved.

2 Technological Context

The growth of the World Wide Web and the rapid rise of eCommerce have led to significant efforts to develop standardised software models and technologies to support and enable the engineering of systems involving distributed computation. These efforts are creating a rich and sophisticated context for the development of agent technologies. For example, so-called service-oriented architectures (SOAs) for distributed applications involve the creation of systems based on components, each of which provides pre-defined computational services, and which can then be aggregated dynamically at runtime to create new applications. Other relevant efforts range from low-level wireless communications protocols such as Bluetooth to higher-level web services abstractions and middleware. The development of standard technologies and infrastructure for distributed and eCommerce systems has impacted on the development of agent systems in two major ways.
• Many of these technologies provide implementation methods and middleware, enabling the easy creation of infrastructures for agent-based systems, such as standardised methods for discovery and communication between heterogeneous services.

• Applications now enabled by these technologies are becoming increasingly agent-like, and address difficult technical challenges similar to those that have been the focus of multi-agent systems. These include issues such as trust, reputation, obligations, contract management, team formation, and management of large-scale open systems.

In terms of providing potential infrastructures for the development of agent systems, technologies of particular relevance include the following.

• Base Technologies:
  - The Extensible Markup Language (XML) is a language for defining mark-up languages and syntactic structures for data formats. Though lacking in machine-readable semantics, XML has been used to define higher-level knowledge representations that facilitate semantic annotation of structured documents on the Web.
  - The Resource Description Framework (RDF) is a representation formalism for describing and interchanging metadata.

• eBusiness:
  - ebXML aims to standardise XML business specifications by providing an open XML-based infrastructure enabling the global use of electronic business information in an interoperable, secure and consistent manner.
  - RosettaNet is a consortium of major technology companies working to create and implement industry-wide eBusiness process standards. RosettaNet standards offer a robust non-proprietary solution, encompassing data dictionaries, an implementation framework, and XML-based business message schemas and process specifications for eBusiness standardisation.

• Universal Plug & Play:
  - Jini network technology provides simple mechanisms that enable devices to plug together to form an emergent community in which each device provides services that other devices in the community may use.
  - UPnP offers pervasive peer-to-peer network connectivity of intelligent appliances and wireless devices through a distributed, open networking architecture to enable seamless proximity networking in addition to control and data transfer among networked devices.

• Web Services:
  - UDDI is an industry initiative aimed at creating a platform-independent, open framework for describing services and discovering businesses using the Internet. It is a cross-industry effort driven by platform and software providers, marketplace operators and eBusiness leaders.
  - SOAP provides a simple and lightweight mechanism for exchanging structured and typed information between peers in a decentralised, distributed environment using XML.
  - WSDL/WS-CDL: WSDL provides an XML grammar for describing network services as collections of communication endpoints capable of exchanging messages, thus enabling the automation of the details involved in application communication. WS-CDL allows the definition of abstract interfaces of web services, that is, the business-level conversations or public processes supported by a web service.
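As a concrete illustration of the message formats these standards define, the short Python sketch below builds a minimal SOAP-style request envelope using only the standard library. The envelope namespace shown is the standard SOAP 1.1 one, but the service namespace, the GetQuote operation and its fields are invented purely for illustration; this is a sketch of the general shape of such messages, not of any particular service interface.

    import xml.etree.ElementTree as ET

    # SOAP 1.1 envelope namespace; the service namespace below is a made-up example.
    SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
    SERVICE_NS = "http://example.org/quote-service"

    def build_quote_request(item_id: str, quantity: int) -> str:
        """Return a minimal SOAP-style request envelope as an XML string."""
        ET.register_namespace("soap", SOAP_NS)
        envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
        body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
        request = ET.SubElement(body, f"{{{SERVICE_NS}}}GetQuote")
        ET.SubElement(request, f"{{{SERVICE_NS}}}ItemId").text = item_id
        ET.SubElement(request, f"{{{SERVICE_NS}}}Quantity").text = str(quantity)
        return ET.tostring(envelope, encoding="unicode")

    if __name__ == "__main__":
        print(build_quote_request("widget-42", 10))

In a full deployment, a WSDL description would declare an operation and its message types in this style, and a registry such as UDDI would let other parties discover the service that accepts them.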
Conversely, agent-related activities are already beginning to inform development in a number of these technology areas, including the Semantic Web standardisation efforts of the World Wide Web Consortium (W3C), and the Common Object Request Broker Architecture (CORBA) of the Object Management Group (OMG). Contributions have also come through the Foundation for Intelligent Physical Agents (FIPA; accepted in 2005 by the IEEE as its eleventh standards committee), which defines a range of architectural elements similar to those now adopted in the W3C Web Services Architecture specifications and elsewhere.

These developments with regard to the technological context for agent systems are illustrated in Figure 2.1, which presents the main contextual technologies supporting agent systems development. While research in agent technologies has now been active for over a decade, the figure shows that it is only from 1999, with the appearance of effective service-oriented technologies and pervasive computing technologies, that truly dynamic (ad hoc) networked systems could be built without large investments in establishing the underlying infrastructure. In particular, only with the emergence of Grid computing from 2002, and calls for adaptive wide-scale web service based solutions, is there now a widespread need to provide attractive solutions to the higher-level issues of communication, coordination and security.

Figure 2.1: Agent-related technologies for infrastructure support (a timeline covering Internet and distributed object technologies, peer-to-peer, service-oriented technologies, pervasive computing, web services and the Grid)

Eurobios and SCA Packaging

Many companies find themselves under strong pressures to deliver just-in-time high quality products and services, while operating in a highly competitive market. In one of SCA Packaging's corrugated box plants, customer orders often arrive simultaneously for a range of different boxes, each order with its own colour scheme and specific printing, and often to be delivered at very short notice. Because of the complexity of factory processes and the difficulty of predicting customer behaviour and machine failure, large inventories of finished goods must therefore be managed. SCA Packaging turned to Eurobios to provide an agent-based modelling solution in order to explore different strategies for reducing stock levels without compromising delivery times, as well as evaluating consequences of changes in the customer base. The agent-based simulation developed by Eurobios allowed the company to reduce warehouse levels by over 35% while maintaining delivery commitments.

In general, it is clear that broad technological developments in distributed computation are increasingly addressing problems long explored within the agent research community. There are two inter-related developments here. First, supporting technologies are emerging very quickly. As a consequence, the primary research focus for agent technologies has moved from infrastructure to the higher-level issues concerned with effective coordination and cooperation between disparate services. Second, large numbers of systems are being built and designed using these emerging infrastructures, and are becoming ever more like multi-agent systems; their developers therefore face the same conceptual and technical challenges encountered in the field of agent-based computing.

3 Emerging Trends and Critical Drivers

The development of agent technologies has taken place within a context of wider visions for information technology.
In addition to the specific technologies mentioned in the previous section, there are also several key trends and drivers that suggest that agents and agent technologies will be vital. The discussion is not intended to be exhaustive, but instead indicative of the current impetus for use and deployment of agent systems.

3.1 Semantic Web

Since it was first developed in the early 1990s, the World Wide Web has rapidly and dramatically become a critically important and powerful medium for communication, research and commerce. However, the Web was designed for use by humans, and its power is limited by the ability of humans to navigate the data of different information sources. The Semantic Web is based on the idea that the data on the Web can be defined and linked in such a way that it can be used by machines for the automatic processing and integration of data across different applications (Berners-Lee et al., 2001). This is motivated by the fundamental recognition that, in order for web-based applications to scale, programs must be able to share and process data, particularly when they have been designed independently. The key to achieving this is to augment web pages with descriptions of their content in such a way that it is possible for machines to reason automatically about that content.

Among the particular requirements for the realisation of the Semantic Web vision are: rich descriptions of media and content to improve search and management; rich descriptions of web services to enable and improve discovery and composition; common interfaces to simplify integration of disparate systems; and a common language for the exchange of semantically rich information between software agents. It should be clear from this that the Semantic Web demands effort and involvement from the field of agent-based computing, and the two fields are intimately connected. Indeed, the Semantic Web offers a rich breeding ground both for further fundamental research and for a whole range of agent applications that can (and should) be built on top of it.

3.2 Web Services and Service Oriented Computing

Web services technologies provide a standard means of interoperating between different software applications, running on a variety of different platforms. Specifications cover a wide range of interoperability issues, from basic messaging, security and architecture, to service discovery and the composition of individual services into structured workflows. Standards for each of these areas, produced by bodies such as W3C and OASIS, provide a framework for the deployment of component services accessible using HTTP and XML interfaces. These components can subsequently be combined into loosely coupled applications that deliver increasingly sophisticated value-added services.

In a more general sense, web services standards serve as a potential convergence point for diverse technology efforts such as eBusiness frameworks (ebXML, RosettaNet, etc.), Grid architectures (which are now increasingly based on web services infrastructures) and others, towards a more general notion of service-oriented architectures (SOA). Here, distributed systems are increasingly viewed as collections of service provider and service consumer components, interlinked by dynamically defined workflows. Web services can therefore be realised by agents that send and receive messages, while the services themselves are the resources characterised by the functionality provided. In the same way as agents may perform tasks on behalf of a user, a web service provides this functionality on behalf of its owner, a person or organisation. Web services thus provide a ready-made infrastructure that is almost ideal for use in supporting agent interactions in a multi-agent system. More importantly, perhaps, this infrastructure is widely accepted, standardised, and likely to be the dominant base technology over the coming years. Conversely, an agent-oriented view of web services is gaining increased traction and exposure, since provider and consumer web services environments are naturally seen as a form of agent-based system (Booth et al., 2004).
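The provider and consumer roles described above can be sketched in a few lines of Python. The sketch is illustrative only: the Message structure, the performative field (loosely modelled on agent communication languages) and the quoting service are invented, and a real deployment would exchange such messages over a web services transport rather than by a direct method call.

    from dataclasses import dataclass, field

    @dataclass
    class Message:
        """A minimal request/reply message exchanged between agents."""
        sender: str
        receiver: str
        performative: str          # e.g. "request" or "inform"
        content: dict = field(default_factory=dict)

    class ProviderAgent:
        """Wraps a capability (here, price quoting) as a message-driven service."""
        def __init__(self, name):
            self.name = name

        def handle(self, msg: Message) -> Message:
            if msg.performative == "request" and "item" in msg.content:
                price = 9.99  # placeholder business logic
                return Message(self.name, msg.sender, "inform",
                               {"item": msg.content["item"], "price": price})
            return Message(self.name, msg.sender, "failure", {})

    class ConsumerAgent:
        """Consumes the service on behalf of its owner by sending a request."""
        def __init__(self, name):
            self.name = name

        def get_quote(self, provider: ProviderAgent, item: str) -> Message:
            request = Message(self.name, provider.name, "request", {"item": item})
            return provider.handle(request)

    if __name__ == "__main__":
        provider = ProviderAgent("quote-service")
        consumer = ConsumerAgent("buyer")
        print(consumer.get_quote(provider, "corrugated-box"))

The point of the abstraction is that the consumer deals only in messages, so the same consumer logic could in principle address a local component, a remote web service or another agent.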
In the same way as agents may perform tasks on behalf of a user, a web service provides this functionality on behalf of its owner, a person or organisation. Web services thus provide a ready-made infrastructure that is almost ideal for use in supporting agent interactions in a multi-agent system. More importantly, perhaps, this infrastructure is widely accepted, standardised, and likely to be the dominant base technology over the coming years. Conversely, an agent-oriented view of web services is gaining increased traction and exposure, since provider and consumer web services environments are naturally seen as a form of agent-based system (Booth et al., 2004). 3.3 Peer-to-Peer Computing Peer-to-peer (P2P) computing covers a wide range of infrastructures, technologies and applications that share a single characteristic: they are designed to create networked applications in which every node (or deployed system) is in some sense equivalent to all others, and application functionality is created by potentially arbitrary interconnection between these peers. The consequent absence of the need for centralised server components to manage P2P systems makes them highly attractive in terms of robustness against failure, ease of deployment, scalability and maintenance (Milojicic et al., 2002). The best known P2P applications include hugely popular ﬁle sharing applications such as GnutellaandBitTorrent,Akamaicontentcaching,groupwareapplications(suchasGroove Networks ofﬁce environments) and Internet telephony applications such as Skype. While the majority of these well-known systems are based on proprietary protocols and platforms, toolkits such as Sun Microsystem’s JXTA provide a wide array of networking features for the development of P2P applications, such as messaging, service advertisement and peer 20 AgentLink Roadmap management features. Standardisation for P2P technologies is also underway within the The UK’s Global Grid Forum (GGF), which now includes a P2P working group established by Intel in 2000. eScience programme has P2P applications display a range of agent-like characteristics, often applying self- organisation techniques in order to ensure continuous operation of the network, and allocated £230M relying on protocol design to encourage correct behaviour of clients. (For example, to Grid-related many commercial e-marketplace systems, such as eBay, include simple credit-reputation systems to reward socially beneﬁcial behaviour). As P2P systems become more complex, computing, an increasing number of agent technologies may also become relevant. These include, while for example: auction mechanism design to provide a rigorous basis to incentivise rational behaviour among clients in P2P networks; agent negotiation techniques to improve the Germany’s level of automation of peers in popular applications; increasingly advanced approaches to trust and reputation; and the application of social norms, rules and structures, as well as D-Grid socialsimulation,inordertobetterunderstandthedynamicsofpopulationsofindependent programme agents. has allocated 3.4 Grid Computing ▯300M, and the The Grid is the high-performance computing infrastructure for supporting large-scale French ACI Grid distributed scientiﬁc endeavour that has recently gained heightened and sustained programme interest from several communities (Foster and Kesselman, 2004). The Grid provides a means of developing eScience applications such as those demanded by, for example, nearly ▯50M. 
3.4 Grid Computing

The Grid is the high-performance computing infrastructure for supporting large-scale distributed scientific endeavour that has recently gained heightened and sustained interest from several communities (Foster and Kesselman, 2004). The Grid provides a means of developing eScience applications such as those demanded by, for example, the Large Hadron Collider facility at CERN, engineering design optimisation, bioinformatics and combinatorial chemistry. Yet it also provides a computing infrastructure for supporting more general applications that involve large-scale information handling, knowledge management and service provision.

The UK's eScience programme has allocated £230M to Grid-related computing, while Germany's D-Grid programme has allocated €300M, and the French ACI Grid programme nearly €50M.

Typically, Grid systems are abstracted into several layers, which might include: a data-computation layer dealing with computational resource allocation, scheduling and execution; an information layer dealing with the representation, storage and access of information; and a knowledge layer, which deals with the way knowledge is acquired, retrieved, published and maintained. The Grid thus refers to an infrastructure that enables the integrated, collaborative use of high-end computers, networks, databases, and scientific instruments owned and managed by multiple organisations. Grid applications often involve large amounts of data and computer processing, and often require secure resource sharing across organisational boundaries; they are thus not easily handled by today's Internet and Web infrastructures.

The key benefit of Grid computing more generally is flexibility: the distributed system and network can be reconfigured on demand in different ways as business needs change, in principle enabling more flexible IT deployment and more efficient use of computing resources (Information Age Partnership, 2004). According to BAE Systems (Gould et al., 2003), while the technology is already in a state in which it can realise these benefits in a single organisational domain, the real value comes from cross-organisation use, through virtual organisations, which require ownership, management and accounting to be handled within trusted partnerships. In economic terms, such virtual organisations provide an appropriate way to develop new products and services in high value markets; this facilitates the notion of service-centric software, which is only now emerging because of the constraints imposed by traditional organisations. As the Information Age Partnership (2004) suggests, the future of the Grid is not in the provision of computing power, but in the provision of information and knowledge in a service-oriented economy. Ultimately, the success of the Grid will depend on standardisation and the creation of products, and efforts in this direction are already underway from a range of vendors, including Sun, IBM and HP.

Utility Computing

The Internet has enabled computational resources to be accessed remotely. Networked resources such as digital information, specialised laboratory equipment and computer processing power may now be shared between users in multiple organisations, located at multiple sites. For example, the emerging Grid networks of scientific communities enable shared and remote access to advanced equipment such as supercomputers, telescopes and electron microscopes. Similarly, in the commercial IT arena, shared access to computer processing resources has recently drawn the attention of major IT vendors, with companies such as HP ("utility computing"), IBM ("on-demand computing") and Sun ("N1 Strategy") announcing initiatives in this area. Sharing resources across multiple users, whether commercial or scientific, allows scientists and IT managers to access resources on a more cost-effective basis, and achieves a closer match between demand and supply of resources. Ensuring efficient use of shared resources in this way will require design, implementation and management of resource-allocation mechanisms in a computational setting.
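To give a flavour of what such a resource-allocation mechanism might look like at its simplest, the sketch below allocates each incoming job to the cheapest provider that still has spare capacity. It is a deliberately naive, illustrative rule with invented names (Provider, allocate); the roadmap does not prescribe any particular mechanism, and realistic settings would use auctions or negotiation so that self-interested providers have an incentive to report prices truthfully.

    from dataclasses import dataclass

    @dataclass
    class Provider:
        name: str
        capacity: int         # number of jobs the provider can still accept
        price_per_job: float  # the provider's advertised price

    def allocate(jobs, providers):
        """Greedy allocation: each job goes to the cheapest provider with spare capacity."""
        allocation = {}
        for job in jobs:
            candidates = [p for p in providers if p.capacity > 0]
            if not candidates:
                allocation[job] = None          # no resources left for this job
                continue
            chosen = min(candidates, key=lambda p: p.price_per_job)
            chosen.capacity -= 1
            allocation[job] = chosen.name
        return allocation

    if __name__ == "__main__":
        providers = [Provider("cluster-a", capacity=2, price_per_job=1.0),
                     Provider("cluster-b", capacity=3, price_per_job=0.8)]
        print(allocate(["job-1", "job-2", "job-3", "job-4"], providers))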
Ultimately, 22 AgentLink Roadmap the success of the Grid will depend on standardisation and the creation of products, and efforts in this direction are already underway from a range of vendors, including Sun, IBM and HP. 3.5 Ambient Intelligence The notion of ambient intelligence has largely arisen through the efforts of the European CommissioninidentifyingchallengesforEuropeanresearchanddevelopmentinInformation Society Technologies (IST Advisory Group, 2002). Aimed at seamless delivery of services and applications, it relies on the areas of ubiquitous computing, ubiquitous communication and intelligent user interfaces. The vision describes an environment of potentially thousands of embedded and mobile devices (or software components) interacting to support user- centred goals and activity, and suggests a component-oriented view of the world in which the components are independent and distributed. The consensus is that autonomy, distribution, adaptation, responsiveness, and so on, are key characteristics of these components, and in this sense they share the same characteristics as agents. Ambient intelligence requires these agents to be able to interact with numerous other agents in the environment around them in order to achieve their goals. Such interactions take place between pairs of agents (in one-to-one collaboration or competition), between groups (in reaching consensus decisions or acting as a team), and between agents and the infrastructure resources that comprise their environments (such as large- scale information repositories). Interactions like these enable the establishment of virtual organisations, in which groups of agents come together to form coherent groups able to achieve overarching objectives. The environment provides the infrastructure that enables ambient intelligence scenarios to be realised. On the one hand, agents offering higher-level services can be distinguished from the physical infrastructure and connectivity of sensors, actuators and networks, for example. On the other hand, they can also be distinguished from the virtual infrastructure needed to support resource discovery, large-scale distributed and robust information repositories(asmentionedabove),andthelogicalconnectivityneededtoenableeffective interactions between large numbers of distributed agents and services, for example. In relation to pervasiveness, it is important to note that scalability (more particularly, device scalability), or the need to ensure that large numbers of agents and services are accommodated, as well as heterogeneity of agents and services, is facilitated by the provision of appropriate ontologies. Addressing all of these aspects will require efforts to provide solutions to issues of operation, integration and visualisation of distributed sensors, ad hoc services and network infrastructure. 23 Trends and Drivers 3.6 Self-* Systems and Autonomic Computing Computational systems that are able to manage themselves have been part of the vision for computer science since the work of Charles Babbage. With the increasing complexity of advanced information technology systems, and the increasing reliance of modern society on these systems, attention in recent years has returned to this. Such systems have come to be called self-* systems and networks (pronounced “self-star”), with the asterisk indicating that a variety of attributes are under consideration. 
While an agreed definition of self-* systems is still emerging, aspects of these systems include properties such as: self-awareness, self-organisation, self-configuration, self-management, self-diagnosis, self-correction, and self-repair. Such systems abound in nature, from the level of ecosystems, through large primates (such as man) and down to processes inside single cells. Similarly, many chemical, physical, economic and social systems exhibit self-* properties. Thus, the development of computational systems that have self-* properties is increasingly drawing on research in biology, ecology, statistical physics and the social sciences.

Recent research on computational self-* systems has tried to formalise some of the ideas from these different disciplines, and to identify algorithms and procedures that could realise various self-* attributes, for example in peer-to-peer networks. One particular approach to self-* systems has become known as autonomic computing, considered below. Computational self-* systems and networks provide an application domain for research and development of agent technologies, and also a contribution to agent-based computing theory