This is the first joint working conference between the IFIP Working Groups 11.1 and 11.5. We hope this joint conference will promote collaboration among researchers who focus on security management issues and those who are interested in the integrity and control of information systems. Indeed, management at any level may increasingly be held answerable for the reliable and secure operation of the information systems and services in their respective organizations, in the same manner as they are for the financial aspects of the enterprise. There is therefore an increasing need to ensure proper standards of integrity and control in information systems, so that data, software and, ultimately, the business processes are complete, adequate and valid for the intended functionality and the expectations of the owner (i.e. the user organization). As organizers, we would like to thank the members of the international program committee for their review work during the paper selection process. We would also like to thank the authors of the invited papers, who added a valuable contribution to this first joint working conference. Paul Dowland, X. Sean Wang, December 2005
Contents
Preface vii
Session 1 - Security Standards
Information Security Standards: Adoption Drivers (Invited Paper) 1
JEAN-NOEL EZINGEARD AND DAVID BIRCHALL
Data Quality Dimensions for Information Systems Security: A Theoretical Exposition (Invited Paper) 21
GURVIRENDER TEJAY, GURPREET DHILLON, AND AMITA GOYAL CHIN
From XML to RDF: Syntax, Semantics, Security, and Integrity (Invited Paper) 41
C. FARKAS, V. GOWADIA, A. JAIN, AND D.
The SAMOS workshop is an international gathering of highly qualified researchers from academia and industry, sharing ideas in a 3-day lively discussion on the quiet and inspiring northern mountainside of the Mediterranean island of Samos. The workshop meeting is one of two co-located events (the other event being the IC-SAMOS). As a tradition, the workshop features presentations in the morning, while after lunch all kinds of informal discussions and nut-cracking gatherings take place. The workshop is unique in the sense that not only solved research problems are presented and discussed but also (partly) unsolved problems and in-depth topical reviews can be unleashed in the scientific arena. Consequently, the workshop provides the participants with an environment where collaboration rather than competition is fostered. The SAMOS conference and workshop were established in 2001 by Stamatis Vassiliadis with the goals outlined above in mind, and located on Samos, one of the most beautiful islands of the Aegean. The rich historical and cultural environment of the island, coupled with the intimate atmosphere and the slow pace of a small village by the sea in the middle of the Greek summer, provide a very conducive environment where ideas can be exchanged and shared freely.
This book is the fifth volume of the CoreGRID series. Organized jointly with the Euro-Par 2007 conference, the CoreGRID Symposium intends to become the premier European event on Grid Computing. The aim of this symposium is to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer Computing. The book covers all aspects of Grid Computing, including service infrastructure. It is designed for a professional audience composed of researchers and practitioners in industry. This volume is also suitable for advanced-level students in computer science.
With the fast development of networking and software technologies, information processing infrastructure and applications have been growing at an impressive rate in both size and complexity, to such a degree that the design and development of high-performance and scalable data processing systems and networks have become an ever-challenging issue. As a result, the use of performance modeling and measurement techniques as a critical step in design and development has become a common practice. Research and development on methodology and tools of performance modeling and performance engineering have gained further importance in order to improve the performance and scalability of these systems. Since the seminal work of A. K. Erlang almost a century ago on the modeling of telephone traffic, performance modeling and measurement have grown into a discipline and have been evolving both in their methodologies and in the areas in which they are applied. It is noteworthy that various mathematical techniques were brought into this field, including in particular probability theory, stochastic processes, statistics, complex analysis, stochastic calculus, stochastic comparison, optimization, control theory, machine learning and information theory. The application areas extended from telephone networks to Internet and Web applications, from computer systems to computer software, from manufacturing systems to supply chains, from call centers to workforce management.
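As a concrete illustration of the Erlang lineage mentioned above (a standard classical result, not drawn from this volume): for a trunk of m circuits offered Poisson call traffic of E erlangs, the Erlang B formula gives the probability that an arriving call is blocked:

    B(E, m) = \frac{E^m / m!}{\sum_{k=0}^{m} E^k / k!}

Modern performance modeling extends this style of analysis from telephone circuits to queues, networks, and software systems.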
Embedded systems take over complex control and data processing tasks in diverse application fields such as automotive, avionics, consumer products, and telecommunications. They are the primary driver for improving overall system safety, efficiency, and comfort. The demand for further improvement in these aspects can only be satisfied by designing embedded systems of increasing complexity, which in turn necessitates the development of new system design methodologies based on specification, design, and verification languages. The objective of the book at hand is to provide researchers and designers with an overview of current research trends, results, and application experiences in computer languages for embedded systems. The book builds upon the most relevant contributions to the 2008 conference Forum on Design Languages (FDL), the premier international conference specializing in this field. These contributions have been selected based on the results of reviews provided by leading experts from research and industry. In many cases, the authors have improved their original work by adding breadth, depth, or explanation.
Component Models and Systems for Grid Applications is the essential reference for the most current research on Grid technologies. This first volume of the CoreGRID series addresses such vital issues as the architecture of the Grid, the way software will influence the development of the Grid, and the practical applications of Grid technologies for individuals and businesses alike. Part I of the book, "Application-Oriented Designs," focuses on development methodology and how it may contribute to a more component-based use of the Grid. "Middleware Architecture," the second part, examines portable Grid engines, hierarchical infrastructures, interoperability, as well as workflow modeling environments. The final part of the book, "Communication Frameworks," looks at dynamic self-adaptation, collective operations, and higher-order components. With Component Models and Systems for Grid Applications, editors Vladimir Getov and Thilo Kielmann offer the computing professional and the computing researcher the most informative, up-to-date, and forward-looking thoughts on the fast-growing field of Grid studies.
Reliability and Risk Issues in Large Scale Safety-critical Digital Control Systems provides comprehensive coverage of reliability issues and their corresponding countermeasures in the field of large-scale digital control systems, from the hardware and software in digital systems to the human operators who supervise the overall process of large-scale systems. Unlike other books which examine theories and issues in individual fields, this book reviews important problems and countermeasures across the fields of software reliability, software verification and validation, digital systems, human factors engineering and human reliability analysis. Divided into four sections dealing with software reliability, digital system reliability, human reliability and human operators in large-scale digital systems, the book offers insights from professional researchers in each specialized field in a diverse yet unified approach.
This book provides a panoramic view of the theory and applications of ageing and dependence in mathematical methods for reliability and survival analysis. Ageing and dependence are important characteristics in reliability and survival analysis. They affect decisions with regard to maintenance, repair/replacement, price setting, warranties, medical studies, and other areas. Most of the works covering these topics are theoretical in nature; this book, however, offers applications, exercises, and examples. It serves as a reference for professors and researchers involved in reliability and survival analysis.
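For readers new to the terminology, "ageing" is usually formalized via the hazard (failure) rate; the following is a standard definition, not one specific to this book. A lifetime T with density f and distribution function F has hazard rate

    \lambda(t) = \frac{f(t)}{1 - F(t)}

and is said to age in the IFR (increasing failure rate) sense when \lambda(t) is nondecreasing in t.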
This volume contains papers presented during the 13th International Conference on Information Systems Development - Advances in Theory, Practice and Education (ISD'2004), held in Vilnius, Lithuania, September 9-11, 2004. The intended audience for this book comprises researchers and practitioners interested in current trends in the Information Systems Development (ISD) field. Papers cover a wide range of topics: ISD methodologies, method engineering, business and IS modelling, web systems engineering, database-related issues, information analysis and data mining, quality assessment, costing methods, security issues, impact of organizational environment, and motivation and job satisfaction among IS developers. The selection of papers was carried out by the International Program Committee. All papers were reviewed in advance by three reviewers and evaluated according to their relevance, originality and presentation quality. Papers were evaluated only on their own merits, independent of other submissions. Out of 117 submissions the Program Committee selected 75 research papers to be presented at the conference; the 39 best papers and 5 papers presented by invited speakers are published in this volume. The 13th International Conference on Information Systems Development continues the tradition started with the first Polish-Scandinavian Seminar on Current Trends in Information Systems Development Methodologies, held in Gdansk, Poland in 1988. Through the years this seminar has evolved into one of the most prestigious conferences in the field. The ISD Conference provides an international forum for the exchange of ideas between the research community and practitioners, and offers a venue where ISD-related educational issues are discussed. ISD progresses rapidly, continually creating new challenges for the professionals involved. New concepts and approaches emerge in research as well as in practice.
The use of parallel programming and architectures is essential for simulating and solving problems in modern computational practice. There has been rapid progress in microprocessor architecture, interconnection technology and software development, which directly influences the rapid growth of parallel and distributed computing. However, in order to make these benefits usable in practice, this development must be accompanied by progress in the design, analysis and application aspects of parallel algorithms. In particular, new approaches from parallel numerics are important for solving complex computational problems on parallel and/or distributed systems. The contributions to this book are focused on the topics most prominent in today's parallel computing. These range from parallel algorithmics, programming, tools and network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerical integration, number theory and their applications in computer simulations, which together form the kernel of the monograph. We expect that the book will be of interest to scientists working on parallel computing, doctoral students, teachers, engineers and mathematicians dealing with numerical applications and computer simulations of natural phenomena.
This book came into being in the form of lecture notes for the subject Information technology management (IT management) at Twente University in the Netherlands. Since 1995 this subject has been part of the Master's degree of the course Business Management and Information Technology. Over a decade of teaching, this book developed into what it is today. The book gives an idea of how organizations should organize their information and communication technology facilities in order to be able to say "IT does not matter." Management and the organization of IT are only conveniences within day-to-day operations and enablers for organizations that want to supply other products and services. The book has the following starting points: (a) The IT support of products and services of organizations makes functional and performance demands on the IT facilities. In order to be able to meet these requirements optimally, an IT architecture is required; the IT services and products are supplied within this architecture. (b) Controlling IT is part of normal operational management. This means that in setting up the IT facilities the principles of logistics and operations management apply, and that the information needed for controlling a process makes demands on the set-up of the information service process. The question is whether someone is authorized to supply the data, whether the data corresponds with the physically present objects, and whether the given data is correct and complete. (c) A distinction is made between the IT demand and the IT supply organization. Both organizations have to be set up. Methods indicate which processes have to be in place in these organizations, and each of these processes has financial, personnel, legal and security aspects.
Three approaches can be applied to determine the performance of parallel and distributed computer systems: measurement, simulation, and mathematical methods. This book introduces various network architectures for parallel and distributed systems as well as for systems-on-chip, and presents a strategy for developing a generator for automatic model derivation. It will appeal to researchers and students in network architecture design and performance analysis.
Operating system kernels are central to the functioning of computers. The security of the overall system, as well as its reliability and responsiveness, depends upon the correct functioning of the kernel. This unique approach - presenting a formal specification of a kernel - starts with basic constructs and develops a set of kernels; proofs are included as part of the text.
The authors here put together the first reference on all aspects of testing and validating service-oriented architectures. With contributions by leading academic and industrial research groups, it offers detailed guidelines for the actual validation process. Readers will find a comprehensive survey of state-of-the-art approaches as well as techniques and tools to improve the quality of service-oriented applications. It also includes references and scenarios for future research and development.
A family of internationally popular microcontrollers, the Atmel AVR microcontroller series is a low-cost hardware development platform suitable for an educational environment. Until now, no text focused on the assembly language programming of these microcontrollers. Through detailed coverage of assembly language programming principles and techniques, Some Assembly Required: Assembly Language Programming with the AVR Microcontroller teaches the basic system capabilities of 8-bit AVR microcontrollers. The text illustrates fundamental computer architecture and programming structures using AVR assembly language. It employs the core AVR 8-bit RISC microcontroller architecture and a limited collection of external devices, such as push buttons, LEDs, and serial communications, to describe control structures, memory use and allocation, stacks, and I/O. Each chapter contains numerous examples and exercises, including programming problems. By studying assembly languages, computer scientists gain an understanding of the functionality of basic processors and how their capabilities support high level languages and applications. Exploring this connection between hardware and software, this book provides a foundation for understanding compilers, linkers, loaders, and operating systems in addition to the processors themselves.
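To give a flavor of the bare-metal I/O the book teaches (rendered here in avr-libc C rather than the book's AVR assembly; the target device, clock speed, and LED pin are illustrative assumptions):

    /* Blink an LED on pin PB0 of an 8-bit AVR (assuming an ATmega328P).
       A minimal sketch only; F_CPU and the PB0 wiring are assumptions. */
    #define F_CPU 1000000UL      /* assumed 1 MHz clock, needed by _delay_ms() */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void) {
        DDRB |= _BV(PB0);        /* configure PB0 as an output */
        for (;;) {
            PORTB ^= _BV(PB0);   /* toggle the LED */
            _delay_ms(500);      /* wait half a second */
        }
    }

The same toggle-and-delay loop, written against the DDRB and PORTB registers directly, is a classic first exercise in AVR assembly.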
It is our great pleasure to present the proceedings of the 16th International Conference on Analytical and Stochastic Modelling Techniques and Applications (ASMTA 2009), which took place in Madrid. The conference has become an established annual event in the agenda of the experts of analytical modelling and performance evaluation in Europe and internationally. This year the proceedings continued to be published as part of Springer's prestigious Lecture Notes in Computer Science (LNCS) series. This is another sign of the growing confidence in the quality standards and procedures followed in the reviewing process and the program compilation. Following the traditions of the conference, ASMTA 2009 was honored to have a distinguished keynote speaker in the person of Kishor Trivedi. Professor Trivedi holds the Hudson Chair in the Department of Electrical and Computer Engineering at Duke University, Durham, NC, USA. He is the Duke-Site Director of an NSF Industry-University Cooperative Research Center between NC State University and Duke University for carrying out applied research in computing and communications. He has been on the Duke faculty since 1975. He is the author of a well-known text entitled Probability and Statistics with Reliability, Queuing and Computer Science Applications, published by Prentice-Hall, the second edition of which has just appeared. He has also published two other books, entitled Performance and Reliability Analysis of Computer Systems, published by Kluwer Academic Publishers, and Queueing Networks and Markov Chains, by John Wiley. He is also known for his work on the modelling and analysis of software aging and rejuvenation. The conference maintained the tradition of high-quality programs, with an acceptance rate of about 40%.
OpenMP is an application programming interface (API) that is widely accepted as a de facto standard for high-level shared-memory parallel programming. It is a portable, scalable programming model that provides a simple and flexible interface for developing shared-memory parallel applications in Fortran, C, and C++. Since its introduction in 1997, OpenMP has gained support from the majority of high-performance compiler and hardware vendors. Under the direction of the OpenMP Architecture Review Board (ARB), the OpenMP specification is undergoing further improvement. Active research in OpenMP compilers, runtime systems, tools, and environments continues to drive OpenMP evolution. To provide a forum for the dissemination and exchange of information about and experiences with OpenMP, the community of OpenMP researchers and developers in academia and industry is organized under cOMPunity (www.compunity.org). This organization has held workshops on OpenMP since 1999. This book contains the proceedings of the 5th International Workshop on OpenMP held in Dresden in June 2009. With sessions on tools, benchmarks, applications, performance and runtime environments it covered all aspects of the current use of OpenMP. In addition, several contributions presented proposed extensions to OpenMP and evaluated reference implementations of those extensions. An invited talk provided the details on the latest specification development inside the Architecture Review Board. Together with the two keynotes about OpenMP on hardware accelerators and future generation processors, it demonstrated that OpenMP is suitable for future generation systems.
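For readers who have not seen OpenMP, here is a minimal C sketch of the shared-memory style it standardizes (illustrative only; compile with an OpenMP-enabled compiler, e.g. gcc -fopenmp):

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        double sum = 0.0;
        /* Distribute the loop iterations across threads; the reduction
           clause combines each thread's partial sum without data races. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / i;
        printf("harmonic(1e6) = %f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }

A single pragma turns the sequential loop parallel, which is exactly the incremental, directive-based approach that made OpenMP a de facto standard.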
Knowledge science is an emerging discipline resulting from the demands of a knowledge-based economy and information revolution. Explaining how to improve our knowledge-based society, Knowledge Science: Modeling the Knowledge Creation Process addresses problems in collecting, synthesizing, coordinating, and creating knowledge. The book introduces several key concepts in knowledge science:
- Knowledge technology, which encompasses classification, representation, modeling, identification, acquisition, searching, organization, storage, conversion, and dissemination
- Knowledge management, which covers three different yet related areas (knowledge assets, knowing processes, knower relations)
- Knowledge discovery and data mining, which combine databases, statistics, machine learning, and related areas to discover and extract valuable knowledge from large volumes of data
- Knowledge synthesis, knowledge justification, and knowledge construction, which are important in solving real-life problems
The book's contributors, specialists in decision science, artificial intelligence, systems engineering, behavioral science, and management science, present their own original ideas, including an Oriental systems philosophy, a new episteme in the knowledge-based society, and a theory of knowledge construction. They emphasize the importance of systemic thinking for developing a better society in the current knowledge-based era.
The First International Workshop on Traffic Monitoring and Analysis (TMA 2009) was an initiative from the COST Action IC0703 "Data Traffic Monitoring and Analysis: Theory, Techniques, Tools and Applications for the Future Networks" (www.cost-tma.eu). The COST program is an intergovernmental framework for European Cooperation in Science and Technology, allowing the coordination of nationally funded research on a European level. Each COST Action contributes to reducing the fragmentation in research and opening the European Research Area to cooperation worldwide. Traffic monitoring and analysis (TMA) is now an important research topic within the field of networking. It involves many research groups worldwide that are collectively advancing our understanding of the Internet. The importance of TMA research is motivated by the fact that modern packet networks are highly complex and ever-evolving objects. Understanding, developing and managing such environments is difficult and expensive in practice. Traffic monitoring is a key methodology for understanding telecommunication technology and improving its operation, and the recent advances in this field suggest that evolved TMA-based techniques can play a key role in the operation of real networks. Moreover, TMA offers a basis for prevention and response in network security, as typically the detection of attacks and intrusions requires the analysis of detailed traffic records. On the more theoretical side, TMA is an attractive research topic for many reasons.
... in the algorithmic and foundational aspects, high-level approaches as well as more applied and technology-related issues regarding tools and applications of wireless sensor networks. June 2009 Jie Wu, Viktor K. Prasanna, Ivan Stojmenovic
Message from the Program Chair
This proceedings volume includes the accepted papers of the 5th International Conference on Distributed Computing in Sensor Systems. This year we introduced some changes in the composition of the three tracks to increase cross-disciplinary interactions. The Algorithms track was enhanced to include topics pertaining to performance analysis and network optimization and renamed "Algorithms and Analysis." The Systems and Applications tracks, previously separate, were combined into a single track. And a new track was introduced on "Signal Processing and Information Theory." DCOSS 2009 received 116 submissions for the three tracks. After a thorough review process, in which at least three reviews were solicited for all papers, a total of 26 papers were accepted. The research contributions in this proceedings span many aspects of sensor systems, including energy-efficient mechanisms, tracking and surveillance, activity recognition, simulation, query optimization, network coding, localization, application development, and data and code dissemination. Based on the reviews, we also identified the best paper from each track, as follows: Best paper in the Algorithms and Analysis track: "Efficient Sensor Placement for Surveillance Problems" by Pankaj Agarwal, Esther Ezra and Shashidhara Ganjugunte. Best paper in the Applications and Systems track: "Optimal Allocation of Time-Resources for Multihypothesis Activity-Level Detection" by Gautam Thatte, Viktor Rozgic, Ming Li, Sabyasachi Ghosh, Urbashi Mitra, Shri Narayanan, Murali Annavaram and Donna Spruijt-Metz. Best paper in the Signal Processing and Information Theory track: "Distributed Computation of Likelihood Maps for Target Tracking" by Jonathan Gallagher, Randolph Moses and Emre Ertin.
Parameterized complexity theory is a recent branch of computational complexity theory that provides a framework for a refined analysis of hard algorithmic problems. The central notion of the theory, fixed-parameter tractability, has led to the development of various new algorithmic techniques and a whole new theory of intractability. This book is a state-of-the-art introduction to both algorithmic techniques for fixed-parameter tractability and the structural theory of parameterized complexity classes, and it presents detailed proofs of recent advanced results that have not appeared in book form before. Several chapters are each devoted to intractability, algorithmic techniques for designing fixed-parameter tractable algorithms, and bounded fixed-parameter tractability and subexponential time complexity. The treatment is comprehensive, and the reader is supported with exercises, notes, a detailed index, and some background on complexity theory and logic. The book will be of interest to computer scientists, mathematicians and graduate students engaged with algorithms and problem complexity.
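For orientation, the theory's central definition is standardly stated as follows (a textbook definition, not quoted from this blurb): a parameterized problem with instance x and parameter k is fixed-parameter tractable if it is decidable in time

    f(k) \cdot |x|^{O(1)}

for some computable function f depending only on k; crucially, the degree of the polynomial does not depend on the parameter.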
Euro-Par is an annual series of international conferences dedicated to the promotion and advancement of all aspects of parallel and distributed computing. Euro-Par 2009 was the 15th edition in this conference series. Throughout the years, the Euro-Par conferences have always attracted high-quality submissions and have become one of the established conferences in the area of parallel and distributed processing. Built upon the success of the annual conferences and in order to accommodate the needs of special interest groups among the conference participants, starting from 2006 a series of workshops has been organized in conjunction with the Euro-Par main conference. This was the fifth year in which workshops were organized within the Euro-Par conference format. The workshops focus on advanced specialized topics in parallel and distributed computing. These topics reflect new scientific and technological developments. While the community for such new and specific developments is still small and the topics have yet to become mature, the Euro-Par conference offers a platform in the form of a workshop to exchange ideas and discuss cooperation opportunities. The workshops in the past four years have been very successful. The number of workshop proposals and the number of finally accepted workshops have gradually increased since 2006. In 2008, nine workshops were organized in conjunction with the main Euro-Par conference. In 2009, there were again nine workshops.
These proceedings are compiled from revised submissions presented at RV 2008, the 8th International Workshop on Runtime Verification, held on March 30, 2008 in Budapest, Hungary, as a satellite event of ETAPS 2008. There were 27 submissions. Each submission was reviewed by at least three Program Committee members. The committee decided to accept nine papers. This volume also includes two contributions by the invited speakers Jean Goubault-Larrecq (LSV/ENS Cachan) on "A Smell of Orchids" and John Rushby (SRI) on "Runtime Certification." We would like to thank the members of the Program Committee and the additional referees for their timely reviewing and lively participation in the subsequent discussion; the quality of the contributions herein is due to their efforts and expertise. We would like to thank the local organizers of ETAPS 2008 for facilitating this workshop. We would also like to thank the Technical University of Munich for their financial support. Last but not least, we thank the participants of RV 2008 for the stimulating discussions during the workshop and the authors for reflecting this discussion in their revised papers. We acknowledge the effort of the EasyChair support team.
The Workshop on Self-sustaining Systems (S3) is a forum for the discussion of topics relating to computer systems and languages that are able to bootstrap, implement, modify, and maintain themselves. One property of these systems is that their implementation is based on small but powerful abstractions; examples include (amongst others) Squeak/Smalltalk, COLA, Klein/Self, PyPy/Python, Rubinius/Ruby, and Lisp. Such systems are the engines of their own replacement, giving researchers and developers great power to experiment with, and explore future directions from within, their own small language kernels. S3 took place on May 15-16, 2008 at the Hasso-Plattner-Institute (HPI) in Potsdam, Germany. It was an exciting opportunity for researchers and practitioners interested in self-sustaining systems to meet and share their knowledge, experience, and ideas for future research and development. S3 provided an opportunity for a community to gather and discuss the need for self-sustainability in software systems, and to share and explore thoughts on why such systems are needed and how they can be created and deployed. Analogies were made, for example, with evolutionary cycles, and with urban design and the subsequent inevitable socially-driven change. The S3 participants left with a greater sense of community and an enthusiasm for probing more deeply into this subject. We see the need for self-sustaining systems becoming critical not only to the developer's community, but to end-users in business, academia, learning and play, and so we hope that this S3 workshop will become the first of many.
The Second International Conference on High-Performance Computing and Applications (HPCA 2009) was a follow-up event of the successful HPCA 2004. It was held in Shanghai, a beautiful, active, and modern city in China, August 10-12, 2009. It served as a forum to present current work by researchers and software developers from around the world as well as to highlight activities in the high-performance computing area. It aimed to bring together research scientists, application pioneers, and software developers to discuss problems and solutions and to identify new issues in this area. This conference emphasized the development and study of novel approaches for high-performance computing, the design and analysis of high-performance numerical algorithms, and their scientific, engineering, and industrial applications. It offered the conference participants a great opportunity to exchange the latest research results, heighten international collaboration, and discuss future research ideas in HPCA. In addition to 24 invited presentations, the conference received over 300 contributed submissions from over ten countries and regions worldwide, about 70 of which were accepted for presentation at HPCA 2009. The conference proceedings contain some of the invited presentations and contributed submissions, and cover such research areas of interest as numerical algorithms and solutions, high-performance and grid computing, novel approaches to high-performance computing, massive data storage and processing, hardware acceleration, and their wide applications.