This unique book examines up-to-the-minute uses of technology in financial markets and then explains how you can profit from that knowledge. To participate in mainstream .NET development, you must address the changes in financial markets by using the most sophisticated tools available: Microsoft .NET technology. Software developers and architects, IT pros, and tech-savvy business users alike will find this book comprehensive and relevant. Each chapter presents problems and solutions that cover business aspects and relevant .NET features. Each aspect of .NET is analyzed in its proper context, so you'll understand why it is relevant and applicable in a real-life business case.
The explosion of information technology has led to substantial growth of web-accessible linguistic data in terms of quantity, diversity and complexity. These resources become even more useful when interlinked with each other to generate network effects. The general trend of providing data online is thus accompanied by newly developing methodologies to interconnect linguistic data and metadata. This includes linguistic data collections, general-purpose knowledge bases (e.g., DBpedia, a machine-readable edition of Wikipedia), and repositories with specific information about languages, linguistic categories and phenomena. The Linked Data paradigm provides a framework for interoperability and access management, and thereby makes it possible to integrate information from such a diverse set of resources. The contributions assembled in this volume illustrate the breadth of applications of the Linked Data paradigm for representative types of language resources. They cover lexical-semantic resources, annotated corpora, typological databases as well as terminology and metadata repositories. The book includes representative applications from diverse fields, ranging from academic linguistics (e.g., typology and corpus linguistics) through applied linguistics (e.g., lexicography and translation studies) to technical applications (in computational linguistics, Natural Language Processing and information technology). This volume accompanies the Workshop on Linked Data in Linguistics 2012 (LDL-2012) in Frankfurt/M., Germany, organized by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN). It assembles contributions of the workshop participants and, beyond this, summarizes initial steps in the formation of a Linked Open Data cloud of linguistic resources, the Linguistic Linked Open Data cloud (LLOD).
The aim of this book is to present the mathematical theory and the know-how to write computer programs for the numerical approximation of Optimal Control of PDEs. The computer programs are presented in a straightforward generic language. As a consequence they are well structured, clearly explained and can be translated easily into any high-level programming language. Applications and corresponding numerical tests are also given and discussed. To our knowledge, this is the first book to put together mathematics and computer programs for Optimal Control in order to bridge the gap between abstract mathematical algorithms and concrete numerical ones. The text is addressed to students and graduates in Mathematics, Mechanics, Applied Mathematics, Numerical Software, Information Technology and Engineering. It can also be used for Master's and Ph.D. programs.
This book celebrates Michael Stonebraker's accomplishments that led to his 2014 ACM A.M. Turing Award "for fundamental contributions to the concepts and practices underlying modern database systems." The book describes, for the broad computing community, the unique nature, significance, and impact of Mike's achievements in advancing modern database systems over more than forty years. Today, data is considered the world's most valuable resource, whether it is in the tens of millions of databases used to manage the world's businesses and governments, in the billions of databases in our smartphones and watches, or residing elsewhere, as yet unmanaged, awaiting the elusive next generation of database systems. Every one of those millions or billions of databases includes features that are celebrated by the 2014 Turing Award and described in this book. Why should I care about databases? What is a database? What is data management? What is a database management system (DBMS)? These are just some of the questions that this book answers, in describing the development of data management through the achievements of Mike Stonebraker and his over 200 collaborators. In reading the stories in this book, you will discover core data management concepts that were developed over the two greatest eras (so far) of data management technology. The book is a collection of 36 stories written by Mike and 38 of his collaborators: 23 world-leading database researchers, 11 world-class systems engineers, and 4 business partners. If you are an aspiring researcher, engineer, or entrepreneur, you might read these stories to find such turning points, using them as practice for tilting at your own computer-science windmills and spurring yourself to your next step of innovation and achievement.
For introductory courses in Python Programming and Data Structures. A fundamentals-first approach to programming helps students create efficient, elegant code. Introduction to Python Programming and Data Structures introduces students to basic programming concepts using a fundamentals-first approach that prepares them to learn object-oriented programming and advanced Python programming. This approach presents programming concepts and techniques, including control statements, loops, functions, and arrays, before designing custom classes. Students learn basic logic and programming concepts prior to moving into object-oriented and GUI programming. The content incorporates a wide variety of problems at various levels of difficulty and covers many application areas to engage and motivate students.
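A minimal sketch of that fundamentals-first style in Python (illustrative only, not taken from the book): a loop and a plain function long before any custom class appears.

# A plain function and a loop, the style taught before classes are introduced.
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    total = 0
    for v in values:        # accumulate a running sum over the list
        total += v
    return total / len(values)

scores = [88, 92, 75, 99]
print(average(scores))      # prints 88.5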
The ubiquitous nature of the Internet of Things allows for enhanced connectivity between people in modern society. When applied to various industries, these current networking capabilities create opportunities for new applications. Internet of Things and Advanced Application in Healthcare is a critical reference source for emerging research on the implementation of the latest networking and technological trends within the healthcare industry. Featuring in-depth coverage across the broad scope of the Internet of Things in specialized settings, such as context-aware computing, reliability, and healthcare support systems, this publication is an ideal resource for professionals, researchers, upper-level students, practitioners, and technology developers seeking innovative material on the Internet of Things and its distinct applications. Topics covered include: assistive technologies, context-aware computing systems, health risk management, healthcare support systems, reliability concerns, smart healthcare, and wearable sensors.
While compilers for high-level programming languages are large, complex software systems, they have particular characteristics that differentiate them from other software systems. Their functionality is almost completely well-defined - ideally there exist complete, precise descriptions of the source and target languages. Additional descriptions of the interfaces to the operating system, programming system and programming environment, and to other compilers and libraries are often available. The book deals with the optimization phase of compilers. In this phase, programs are transformed in order to increase their efficiency. To preserve the semantics of the programs under these transformations, the compiler has to verify the associated applicability conditions, which are checked using static analysis of the programs. In this book the authors systematically describe the analysis and transformation of imperative and functional programs. In addition to a detailed description of important efficiency-improving transformations, the book offers a concise introduction to the necessary concepts and methods, namely operational semantics, lattices, and fixed-point algorithms. This book is intended for students of computer science and is supported throughout with examples, exercises and program fragments.
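As a taste of that fixed-point machinery, a round-robin iteration for liveness analysis can be sketched in a few lines of Python (an illustrative sketch, not code from the book; the three-block program is invented):

# Each block maps to (gen, kill, successors) for a tiny hypothetical program.
blocks = {
    "B1": ({"x"}, {"y"}, ["B2"]),
    "B2": ({"y"}, {"x"}, ["B3", "B1"]),
    "B3": ({"x", "y"}, set(), []),
}

live_in = {b: set() for b in blocks}
live_out = {b: set() for b in blocks}

changed = True
while changed:                      # iterate up to the least fixed point
    changed = False
    for b, (gen, kill, succs) in blocks.items():
        out = set().union(*(live_in[s] for s in succs))
        inn = gen | (out - kill)    # dataflow equation: in = gen ∪ (out \ kill)
        if inn != live_in[b] or out != live_out[b]:
            live_in[b], live_out[b] = inn, out
            changed = True

print(live_in)   # terminates because the transfer functions are monotone

The loop is guaranteed to stop because each set can only grow and the lattice of variable sets is finite, exactly the argument the book develops formally.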
Evolutionary Algorithms and Agricultural Systems deals with the practical application of evolutionary algorithms to the study and management of agricultural systems. The rationale of systems research methodology is introduced, and examples listed of real-world applications. It is the integration of these agricultural systems models with optimization techniques, primarily genetic algorithms, that forms the focus of this book. The advantages are outlined, with examples of agricultural models ranging from national and industry-wide studies down to the within-farm scale. The potential problems of this approach are also discussed, along with practical methods of resolving them. Agricultural applications using alternative optimization techniques (gradient and direct-search methods, simulated annealing and quenching, and the tabu search strategy) are also listed and discussed. The particular problems and methodologies of these algorithms, including advantageous features that may benefit a hybrid approach or be usefully incorporated into evolutionary algorithms, are outlined. From consideration of this and the published examples, it is concluded that evolutionary algorithms are the superior method for the practical optimization of models of agricultural and natural systems. General recommendations on robust options and parameter settings for evolutionary algorithms are given for use in future studies. Evolutionary Algorithms and Agricultural Systems will prove useful to practitioners and researchers applying these methods to the optimization of agricultural or natural systems, and would also be suited as a text for systems management, applied modeling, or operations research.
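The core genetic-algorithm loop behind such studies fits in a few lines of Python (a toy sketch; the quadratic "farm model" objective is invented for illustration):

import random

def fitness(x):
    # Hypothetical stand-in for a farm-model objective: peak profit at x = 3.
    return -(x - 3.0) ** 2

pop = [random.uniform(-10, 10) for _ in range(30)]
for generation in range(100):
    # Tournament selection: keep the better of two random parents.
    parents = [max(random.sample(pop, 2), key=fitness) for _ in range(30)]
    # Crossover (averaging) plus Gaussian mutation produces the next population.
    pop = [
        (random.choice(parents) + random.choice(parents)) / 2
        + random.gauss(0, 0.1)
        for _ in range(30)
    ]
print(round(max(pop, key=fitness), 2))   # approaches 3.0, the toy optimum

In a real study the fitness call would run a full agricultural simulation model, which is why robust parameter settings matter so much.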
Strategies for Quasi-Monte Carlo builds a framework to design and analyze strategies for randomized quasi-Monte Carlo (RQMC). One key to efficient simulation using RQMC is to structure problems to reveal a small set of important variables, their number being the effective dimension, while the other variables collectively are relatively insignificant. Another is smoothing. The book provides many illustrations of both keys, in particular for problems involving Poisson processes or Gaussian processes. RQMC beats grids by a huge margin. With low effective dimension, RQMC is an order of magnitude more efficient than standard Monte Carlo. With, in addition, certain smoothness - perhaps induced - RQMC is an order of magnitude more efficient than deterministic QMC. Unlike the latter, RQMC permits error estimation via the central limit theorem. For random-dimensional problems, such as occur with discrete-event simulation, RQMC is judiciously combined with standard Monte Carlo to keep memory requirements bounded. This monograph has been designed to appeal to a diverse audience, including those with applications in queueing, operations research, computational finance, mathematical programming, partial differential equations (both deterministic and stochastic), and particle transport, as well as to probabilists and statisticians wanting to know how to apply a powerful tool effectively, and to those interested in numerical integration or optimization in their own right. It recognizes that the heart of practical application is algorithms, so pseudocodes appear throughout the book. While not primarily a textbook, it is suitable as a supplementary text for certain graduate courses. As a reference, it belongs on the shelf of everyone with a serious interest in achieving more than incremental improvements in simulation efficiency.
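The error-estimation point is concrete: each independent randomization gives an unbiased estimate, so replications yield a CLT confidence interval. A sketch in Python using SciPy's quasi-Monte Carlo module (assuming scipy >= 1.7 is available; the integrand is a toy with known mean 1/4):

import numpy as np
from scipy.stats import qmc

def f(u):
    # Toy integrand on [0,1]^2: product of coordinates, exact mean 1/4.
    return u[:, 0] * u[:, 1]

estimates = []
for rep in range(20):                       # independent randomizations
    sampler = qmc.Sobol(d=2, scramble=True, seed=rep)
    u = sampler.random_base2(m=10)          # 2^10 scrambled Sobol points
    estimates.append(f(u).mean())

est = np.mean(estimates)
se = np.std(estimates, ddof=1) / np.sqrt(len(estimates))
print(est, "+/-", 1.96 * se)                # CLT-based error estimate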
When discussing classification, support vector machines are known to be a capable and efficient technique that learns and predicts with high accuracy in a short time frame. Yet their black-box way of doing so makes practical users circumspect about relying on them without much understanding of the how and why of their predictions. The question raised in this book is how this 'masked hero' can be made more comprehensible and friendly to the public: provide a surrogate model for its hidden optimization engine, replace the method completely, or appoint a friendlier approach to tag along and offer the much-desired explanations? Evolutionary algorithms can do all of these, and this book presents such possibilities for achieving high accuracy, comprehensibility, reasonable runtime and unconstrained performance.
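The surrogate-model idea can be made concrete: evolve a readable rule until it agrees with the black box's predictions. A toy Python sketch (the data and the threshold "SVM" stand-in are invented):

import random

# Black box standing in for a trained SVM: labels points by a hidden rule.
black_box = lambda x: 1 if x > 4.2 else 0
data = [random.uniform(0, 10) for _ in range(200)]

def agreement(threshold):
    """Fraction of points where the rule 'x > threshold' matches the box."""
    return sum((x > threshold) == black_box(x) for x in data) / len(data)

# (1+1) evolutionary search for the most faithful readable rule.
t = random.uniform(0, 10)
for _ in range(500):
    candidate = t + random.gauss(0, 0.5)    # mutate the current rule
    if agreement(candidate) >= agreement(t):
        t = candidate
print(f"surrogate rule: x > {t:.2f}")       # converges near 4.2

The evolved threshold is trivially explainable, while the black box it imitates need not be.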
Digital forensics deals with the acquisition, preservation, examination, analysis and presentation of electronic evidence. Networked computing, wireless communications and portable electronic devices have expanded the role of digital forensics beyond traditional computer crime investigations. Practically every crime now involves some aspect of digital evidence; digital forensics provides the techniques and tools to articulate this evidence. Digital forensics also has myriad intelligence applications. Furthermore, it has a vital role in information assurance - investigations of security breaches yield valuable information that can be used to design more secure systems. Advances in Digital Forensics V describes original research results and innovative applications in the discipline of digital forensics. In addition, it highlights some of the major technical and legal issues related to digital evidence and electronic crime investigations. The areas of coverage include: themes and issues, forensic techniques, integrity and privacy, network forensics, forensic computing, investigative techniques, legal issues and evidence management. This book is the fifth volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.9 on Digital Forensics, an international community of scientists, engineers and practitioners dedicated to advancing the state of the art of research and practice in digital forensics. The book contains a selection of twenty-three edited papers from the Fifth Annual IFIP WG 11.9 International Conference on Digital Forensics, held at the National Center for Forensic Science, Orlando, Florida, USA in the spring of 2009. Advances in Digital Forensics V is an important resource for researchers, faculty members and graduate students, as well as for practitioners and individuals engaged in research and development efforts for the law enforcement and intelligence communities.
Real-Time Systems Engineering and Applications is a well-structured collection of chapters pertaining to present and future developments in real-time systems engineering. After an overview of real-time processing, theoretical foundations are presented. The book then introduces useful modeling concepts and tools, before concentrating on the more practical aspects of real-time engineering, with a thorough overview of the present state of the art in both hardware and software, including related concepts in robotics. Examples are given of novel real-time applications which illustrate the present state of the art. The book concludes with a focus on future developments, giving direction for new research activities and an educational curriculum covering the subject. This book can be used as a source for academic and industrial researchers as well as a textbook for computing and engineering courses covering the topic of real-time systems engineering.
Introduction, or Why I Wrote This Book: In the fall of 1997 a dedicated troff user e-mailed me the macros he used to typeset his books. I took one look inside his file and thought, "I can do this; it's just code." As an experiment I spent a week and wrote a C program and troff macros which formatted and typeset a membership directory for a scholarly society with approximately 2,000 members. When I was done, I could enter two commands, and my program and troff would convert raw membership data into 200 pages of PostScript in 35 seconds. Previously, it had taken me several days to prepare camera-ready copy for the directory using a word processor. For completeness I sat down and tried to write TeX macros for the typesetting. I failed. Although ninety-five percent of my macros worked, I was unable to prepare the columns the project required. As my frustration grew, I began this book - mentally, in my head - as an answer to the question, "Why is TeX so hard to learn?" Why use TeX? Lest you accuse me of the old horse and cart problem, I should address the question, "Why use TeX at all?" before I explain why TeX is hard. I use TeX for the following reasons: it is stable, fast, free, and it uses ASCII. Of course, the most important reason is: TeX does a fantastic job. By stable, I mean it is not likely to change in the next 10 years (much less the next one or two), and it is free of bugs. Both of these are important.
Multilevel decision theory arises to resolve the contradiction between increasing requirements on the processes of design, synthesis, control and management of complex systems and the limited power of the technical, control, computer and other executive devices that have to perform these actions and satisfy requirements in real time. The theory suggests how to replace centralised management of a system by hierarchical co-ordination of sub-processes. All sub-processes have lower dimensions, which supports easier management and decision making, but the sub-processes are interconnected and influence each other. Multilevel systems theory supports two main methodological tools: decomposition and co-ordination. Both have been developed and implemented in practical applications concerning the design, control and management of complex systems. In general, it is always beneficial to find the best or optimal solution in processes of system design, control and management. The tendency towards the best (optimal) decision requires presenting each activity as the definition, and then the solution, of an appropriate optimization problem. Every optimization process needs the mathematical definition and solution of a well-stated optimization problem. These problems belong to two classes: static optimization and dynamic optimization. Static optimization problems are solved by methods of mathematical programming: conditional and unconditional optimization. Dynamic optimization problems are solved by methods of the calculus of variations: the Euler-Lagrange method, the maximum principle, and dynamic programming.
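For reference, the dynamic class rests on conditions such as the Euler-Lagrange equation: a trajectory $x(t)$ minimizing $\int_{t_0}^{t_1} L(t, x, \dot{x})\,dt$ must satisfy

    \frac{\partial L}{\partial x} - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}} = 0,

with the maximum principle and dynamic programming handling constrained and feedback formulations respectively.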
Communication protocols form the operational basis of computer networks and telecommunication systems. They are behavior conventions that describe how communication systems interact with each other, defining the temporal order of the interactions and the formats of the data units exchanged - essentially they determine the efficiency and reliability of computer networks. Protocol Engineering is an important discipline covering the design, validation, and implementation of communication protocols. Part I of this book is devoted to the fundamentals of communication protocols, describing their working principles and implicitly also those of computer networks. The author introduces the concepts of service, protocol, layer, and layered architecture, and introduces the main elements required in the description of protocols using a model language. He then presents the most important protocol functions. Part II deals with the description of communication protocols, offering an overview of the various formal methods, the essence of Protocol Engineering. The author introduces the fundamental description methods, such as finite state machines, Petri nets, process calculi, and temporal logics, that are in part used as semantic models for formal description techniques. He then introduces one representative technique for each of the main description approaches, among others SDL and LOTOS, and surveys the use of UML for describing protocols. Part III covers the protocol life cycle and the most important development stages, presenting the reader with approaches for systematic protocol design, with various verification methods, with the main implementation techniques, and with strategies for their testing, in particular with conformance and interoperability tests, and the test description language TTCN. The author uses the simple data transfer example protocol XDT (eXample Data Transfer) throughout the book as a reference protocol to exemplify the various description techniques and to demonstrate important validation and implementation approaches. The book is an introduction to communication protocols and their development for undergraduate and graduate students of computer science and communication technology, and it is also a suitable reference for engineers and programmers. Most chapters contain exercises, and the author's accompanying website provides further online material including a complete formal description of the XDT protocol and an animated simulation visualizing its behavior.
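The finite-state-machine view is easy to picture. A minimal Python sketch of a stop-and-wait sender automaton (invented for illustration; this is not the XDT specification):

# Transitions of a toy stop-and-wait sender: (state, event) -> (next_state, action)
TRANSITIONS = {
    ("idle",         "send"):    ("awaiting_ack", "transmit frame"),
    ("awaiting_ack", "ack"):     ("idle",         "advance window"),
    ("awaiting_ack", "timeout"): ("awaiting_ack", "retransmit frame"),
}

def step(state, event):
    """Advance the automaton; undefined (state, event) pairs are protocol errors."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"no transition for {event!r} in state {state!r}")
    next_state, action = TRANSITIONS[(state, event)]
    print(f"{state} --{event}/{action}--> {next_state}")
    return next_state

s = "idle"
for e in ["send", "timeout", "ack"]:   # one lost frame, then success
    s = step(s, e)

Formal techniques such as SDL describe exactly this kind of state/event table, which is what makes protocols mechanically verifiable.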
* Treats Lisp as a language for commercial applications, not a language for academic AI concerns. This could be considered a secondary text for the Lisp course that most schools teach, and would appeal to students who sat through a Lisp course in college without quite getting it - a "nostalgia" approach, as in "wow, Lisp can be practical..."
* Discusses the Lisp programming model and environment. Contains an introduction to the language and gives a thorough overview of all of Common Lisp's main features.
* Designed for experienced programmers no matter what languages they may be coming from, and written for a modern audience - programmers who are familiar with languages like Java, Python, and Perl.
* Includes several examples of working code that actually does something useful, like Web programming and database access.
This IMA Volume in Mathematics and its Applications, Algorithms for Parallel Processing, is based on the proceedings of a workshop that was an integral part of the 1996-97 IMA program on "Mathematics in High-Performance Computing." The workshop brought together algorithm developers from theory, combinatorics, and scientific computing. The topics ranged over models, linear algebra, sorting, randomization, and graph algorithms and their analysis. We thank Michael T. Heath of the University of Illinois at Urbana (Computer Science), Abhiram Ranade of the Indian Institute of Technology (Computer Science and Engineering), and Robert S. Schreiber of Hewlett-Packard Laboratories for their excellent work in organizing the workshop and editing the proceedings. We also take this opportunity to thank the National Science Foundation (NSF) and the Army Research Office (ARO), whose financial support made the workshop possible. Avner Friedman, Robert Gulliver. Preface: The Workshop on Algorithms for Parallel Processing was held at the IMA September 16-20, 1996; it was the first workshop of the IMA year dedicated to the mathematics of high-performance computing. The workshop organizers were Abhiram Ranade of the Indian Institute of Technology, Bombay, Michael Heath of the University of Illinois, and Robert Schreiber of Hewlett-Packard Laboratories. Our idea was to bring together researchers who do innovative, exciting, parallel algorithms research on a wide range of topics, and by sharing insights, problems, tools, and methods to learn something of value from one another.
This edited volume comprises invited chapters that cover five areas of the current and future development of intelligent systems and information sciences. Half of the chapters were presented as invited talks at the Workshop "Future Directions for Intelligent Systems and Information Sciences" held in Dunedin, New Zealand, 22-23 November 1999, after the International Conference on Neuro-Information Processing (ICONIP/ANZIIS/ANNES '99) held in Perth, Australia. In order to make this volume useful for researchers and academics in the broad area of information sciences, I invited prominent researchers to submit materials and present their views about future paradigms, trends and directions. Part I contains chapters on adaptive, evolving, learning systems. These are systems that learn in a life-long, on-line mode and in a changing environment. The first chapter, written by the editor, briefly presents the paradigm of Evolving Connectionist Systems (ECOS) and some of their applications. The chapter by Sung-Bae Cho presents the paradigms of artificial life and evolutionary programming in the context of several applications (mobile robots, adaptive agents on the WWW). The following three chapters, written by R. Duro, J. Santos and J.A. Becerra (chapter 3), G. Coghill (chapter 4), and Y. Maeda (chapter 5), introduce new techniques for building adaptive, learning robots.
Software development is a complex problem-solving activity with a high level of uncertainty. There are many technical challenges concerning scheduling, cost estimation, reliability, performance, etc., which are further aggravated by weaknesses such as changing requirements, team dynamics, and high staff turnover. Thus the management of knowledge and experience is a key means of systematic software development and process improvement. "Managing Software Engineering Knowledge" illustrates several theoretical examples of this vision and solutions applied to industrial practice. It is structured in four parts addressing the motives for knowledge management, the concepts and models used in knowledge management for software engineering, their application to software engineering, and practical guidelines for managing software engineering knowledge. This book provides a comprehensive overview of the state of the art and best practice in knowledge management applied to software engineering. While researchers and graduate students will benefit from the interdisciplinary approach leading to basic frameworks and methodologies, professional software developers and project managers will also profit from industrial experience reports and practical guidelines.
The emergence of the system-on-chip (SoC) era is creating many new challenges at all stages of the design process. Engineers are reconsidering how designs are specified, partitioned and verified. With systems and software engineers programming in C/C++ and their hardware counterparts working in hardware description languages such as VHDL and Verilog, problems arise from the use of different design languages, incompatible tools and fragmented tool flows. Momentum is building behind the SystemC language and modeling platform as the best solution for representing functionality, communication, and software and hardware implementations at various levels of abstraction. The reason is clear: increasing design complexity demands very fast executable specifications to validate system concepts, and only C/C++ delivers adequate levels of abstraction, hardware-software integration, and performance. System design today also demands a single common language and modeling foundation in order to make interoperable system-level design tools, services and intellectual property a reality. SystemC is entirely based on C/C++, and the complete source code for the SystemC reference simulator can be freely downloaded from www.systemc.org and executed on both PCs and workstations. System Design with SystemC provides a comprehensive introduction to the powerful modeling capabilities of the SystemC language, and also provides a large and valuable set of system-level modeling examples and techniques. Written by experts from Cadence Design Systems, Inc. and Synopsys, Inc. who were deeply involved in the definition and implementation of the SystemC language and reference simulator, this book will provide you with the key concepts you need to be successful with SystemC. It thoroughly covers the new system-level modeling capabilities available in SystemC 2.0 as well as the hardware modeling capabilities available in earlier versions of SystemC. System Design with SystemC will be of interest to designers in industry working on complex system designs, as well as students and researchers within academia. All of the examples and techniques described within this book can be used with freely available compilers and debuggers - no commercial software is needed. Instructions for obtaining the free source code for the examples contained in this book are included in the first chapter.
If you're grounded in the basics of Swift, Xcode, and the Cocoa framework, this book provides a structured explanation of all essential real-world iOS app components. Through deep exploration and copious code examples, you'll learn how to create views, manipulate view controllers, and add features from iOS frameworks.
* Create, arrange, draw, layer, and animate views that respond to touch
* Use view controllers to manage multiple screens of interface
* Master interface classes for scroll views, table views, text, popovers, split views, web views, and controls
* Dive into frameworks for sound, video, maps, and sensors
* Access user libraries: music, photos, contacts, and calendar
* Explore additional topics, including files, networking, and threads
* Stay up-to-date on iOS 11 innovations, such as: drag and drop, autolayout changes (including the new safe area), stretchable navigation bars, table cell swipe buttons, dynamic type improvements, offline sound file rendering, image picker controller changes, new map annotation types, and more
This volume contains the proceedings of IFIPTM 2009, the Third IFIP WG 11.11 International Conference on Trust Management, held at Purdue University in West Lafayette, Indiana, USA during June 15-19, 2009. IFIPTM 2009 provided a truly global platform for the reporting of research, development, policy and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, and the IFIPTM 2008 conference in Trondheim, Norway, IFIPTM 2009 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion about relevant problems from both research and practice in the areas of academia, business, and government. IFIPTM 2009 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2009 received 44 submissions. The Program Committee selected 17 papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include one invited paper and five demo descriptions. The highlights of IFIPTM 2009 included invited talks and tutorials by academic and governmental experts in the fields of trust management, privacy and security, including Eugene Spafford, Marianne Winslett, and Michael Novak. Running an international conference requires an immense effort from all parties involved. We would like to thank the Program Committee members and external referees for having provided timely and in-depth reviews of the submitted papers. We would also like to thank the Workshop, Tutorial, Demonstration, Local Arrangements, and Website Chairs for having provided great help organizing the conference.
This book describes how to apply ICONIX Process (a minimal, use case-driven modeling process) in an agile software project. It's full of practical advice for avoiding common agile pitfalls. Further, the book defines a core agile subset, so those of you who want to get agile need not spend years learning to do it. Instead, you can simply read this book and apply the core subset of techniques. The book follows a real-life .NET/C# project from inception and UML modeling to working code through several iterations. You can then go online to compare the finished product with the initial set of use cases. The book also introduces several extensions to the core ICONIX Process, including combining Test-Driven Development (TDD) with up-front design to maximize both approaches (with examples using Java and JUnit). And the book incorporates persona analysis to drive the project's goals and reduce requirements churn.
Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of logic programming languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in logic programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs is recommended for people implementing parallel logic programming systems, parallel symbolic systems, parallel AI systems, and parallel theorem proving systems. It will also be useful to people who wish to learn about the implementation of parallel logic programming systems.