The subject of this book is the control of software engineering. The rapidly increasing demand for software is accompanied by a growth in the number of products on the market, as well as in their size and complexity. Our ability to control software engineering is hardly keeping pace with this growth. As a result, software projects are often late, software products sometimes lack the required quality, and the productivity improvements achieved by software engineering are insufficient to keep up with the demand. This book describes ways to improve software engineering control. It argues that control should be expanded to cover the development, maintenance and reuse of software, thus making it possible to apply many of the ideas and concepts that originate in production control and quality control. The book is based on research and experience accumulated over a number of years. During this period I had two employers: Eindhoven University of Technology and Philips Electronics. Research is not a one-man activity, and I would like to thank the following persons for their contributions to the successful completion of this project. First and foremost my Ph.D. advisers Theo Bemelmans, Hans van Vliet and Fred Heemstra, whose insights and experience proved invaluable at every stage. Many thanks are also due to Rob Kusters and Fred Heemstra for their patience in listening to my sometimes wild ideas and for being such excellent colleagues.
An Introduction to R and Python for Data Analysis teaches students to code in R and Python simultaneously. Because R and Python can be used in similar ways, it is useful and efficient to learn both at the same time, helping lecturers and students to teach and learn more, save time, and reinforce the shared concepts and differences of the two systems. This tandem learning helps students become literate in both languages and develop skills that will serve them after their studies. The book presumes no prior experience with computing and is intended for students from a variety of backgrounds. Its side-by-side formatting helps introductory graduate students quickly grasp the basics of R and Python, with the exercises helping them teach themselves the skills they will need upon completing their course, as employers now ask for competency in both R and Python. Teachers and lecturers will also find this book useful, as a single work to help ensure their students are well trained in both languages. All data for the exercises can be found at: https://github.com/tbrown122387/r_and_python_book/tree/master/data. Key features:
- Teaches R and Python in a "side-by-side" way.
- Examples are tailored to aspiring data scientists and statisticians, not software engineers.
- Designed for introductory graduate students.
- Does not assume any mathematical background.
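To give a flavour of the side-by-side approach the book takes, here is a minimal sketch (mine, not an example from the book): each Python line carries its rough R equivalent as a comment.

```python
# Computing a mean and sample standard deviation; each line notes its R counterpart.
import statistics

data = [2.0, 4.0, 4.0, 5.0, 7.0]    # R: data <- c(2, 4, 4, 5, 7)
avg = statistics.mean(data)          # R: avg <- mean(data)
spread = statistics.stdev(data)      # R: spread <- sd(data)
print(avg, spread)                   # R: print(c(avg, spread))
```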
Et moi, ..., si j'avait su comment en revenir, je n'y serais point alle. (And me, ..., if I had known how to come back, I would never have gone.) - Jules Verne

One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled 'discarded nonsense'. - Eric T. Bell

The series is divergent; therefore we may be able to do something with it. - O. Heaviside

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series. This series, Mathematics and Its Applications, started in 1977. Now that over one hundred volumes have appeared it seems opportune to reexamine its scope. At the time I wrote "Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the 'tree' of knowledge of mathematics and related fields does not grow only by putting forth new branches. It also happens, quite often in fact, that branches which were thought to be completely disparate are suddenly seen to be related. Further, the kind and level of sophistication of mathematics applied in various sciences has changed drastically in recent years: measure theory is used (non-trivially) in regional and theoretical economics; algebraic geometry interacts with physics; the Minkowsky lemma, coding theory and the structure of water meet one another in packing and covering theory; quantum fields, crystal defects and mathematical programming profit from homotopy theory; Lie algebras are relevant to filtering; and prediction and electrical engineering can use Stein spaces. And in addition to this there are such new emerging subdisciplines as 'experimental mathematics', 'CFD', 'completely integrable systems', 'chaos, synergetics and large-scale order', which are almost impossible to fit into the existing classification schemes. They draw upon widely different sections of mathematics." By and large, all this still applies today. It is still true that at first sight mathematics seems rather fragmented and that to find, see, and exploit the deeper underlying interrelations more effort is needed, and so are books that can help mathematicians and scientists do so. Accordingly MIA will continue to try to make such books available. If anything, the description I gave in 1977 is now an understatement.
Software product lines are emerging as a critical new paradigm for software development. Product lines are enabling organizations to achieve impressive time-to-market gains and cost reductions. With the increasing number of product lines and product-line researchers and practitioners, the time is right for a comprehensive examination of the issues surrounding the software product line approach. The Software Engineering Institute at Carnegie Mellon University is proud to sponsor the first conference on this important subject. This book comprises the proceedings of the First Software Product Line Conference (SPLC1), held August 28-31, 2000, in Denver, Colorado, USA. The twenty-seven papers of the conference technical program present research results and experience reports that cover all aspects of software product lines. Topics include business issues, enabling technologies, organizational issues, and life-cycle issues. Emphasis is placed on experiences in the development and fielding of product lines of complex systems, especially those that expose problems in the design, development, or evolution of software product lines. The book will be essential reading for researchers and practitioners alike.
Web-based Support Systems (WSS) are an emerging multidisciplinary research area that studies the support of human activities with the Web as the common platform, medium and interface. The Internet affects every aspect of our modern life, and moving support systems online is an increasing trend in many research domains. One of the goals of WSS research is to extend the human physical limitation of information processing in the information age. Research on WSS is motivated by the challenges and opportunities arising from the Internet. The availability, accessibility and flexibility of information, as well as of the tools to access it, lead to a vast number of opportunities, but there are also many challenges: for instance, we have to deal with more complex tasks, as there are increasing demands for quality and productivity. WSS research is a natural evolution of the studies on various computerized support systems such as Decision Support Systems (DSS), Computer Aided Design (CAD), and Computer Aided Software Engineering (CASE). The recent advancement of computer and Web technologies makes the implementation of WSS more feasible; nowadays, it is rare to see a system without some type of Web interaction. WSS research is classified into four groups, the first of which is WSS for specific domains.
Praise for the previous edition: 'Gives an excellent insight into the main issues of creating a website and offers a good foundation of knowledge.' (i.net) Producing for Web 2.0 is a clear and practical guide to the planning, set-up and management of a website in the Web 2.0 era. It gives readers an overview of the current technologies available for online communications and shows how to use them for maximum effect when planning a website. Producing for Web 2.0 sets out the practical toolkit needed for web design and content management. It is supported by a regularly updated and comprehensive companion website at www.producingforweb2.com, where readers can see examples of programming and demonstrations of concepts discussed in the book, as well as try things out themselves.
Many times, web services standards do not explicitly address core issues specific to the financial industry, which makes it difficult to implement standards-compliant systems. Web Services in Finance bridges that gap in standards awareness and helps you acquire the skills to develop secure applications quickly. If you are a .NET or J2EE developer working in the financial industry, currently migrating applications to Web services or writing new Web services, then this book is your ideal companion. The authors thoroughly discuss crucial topics like data representation, messaging, security, privacy, management, monitoring, and more. What's more, the provided examples and API reviews will help you swiftly reach your goals. Table of Contents: Introduction to Web Services; Enterprise Systems; Data Representation; Messaging; Description and Data Format; Discovery and Advertising; Alternative Transports; Security; Quality of Service; Conversations, Workflows, and Transactions.
The advent of the computer age has set in motion a profound shift in our perception of science - its structure, its aims and its evolution. Traditionally, the principal domains of science were, and are, considered to be mathematics, physics, chemistry, biology, astronomy and related disciplines. But today, and to an increasing extent, scientific progress is being driven by a quest for machine intelligence - for systems which possess a high MIQ (Machine IQ) and can perform a wide variety of physical and mental tasks with minimal human intervention. The role model for intelligent systems is the human mind. The influence of the human mind as a role model is clearly visible in the methodologies which have emerged, mainly during the past two decades, for the conception, design and utilization of intelligent systems. At the center of these methodologies are fuzzy logic (FL), neurocomputing (NC), evolutionary computing (EC), probabilistic computing (PC), chaotic computing (CC), and machine learning (ML). Collectively, these methodologies constitute what is called soft computing (SC). In this perspective, soft computing is basically a coalition of methodologies which collectively provide a body of concepts and techniques for automation of reasoning and decision-making in an environment of imprecision, uncertainty and partial truth.
This book presents a collection of contributions ranging from related logics to applied paraconsistency, all dedicated to Jair Minoro Abe on the occasion of his sixtieth birthday. He is one of the experts in paraconsistent engineering, who developed the so-called annotated logics. The book includes important contributions on the foundations and applications of paraconsistent logics in connection with engineering, mathematical logic, philosophical logic, computer science, physics, economics, and biology. It will be of interest to students and researchers working on engineering and logic.
Scientific applications involve very large computations that strain the resources of whatever computers are available. Such computations implement sophisticated mathematics, require deep scientific knowledge, depend on subtle interplay of different approximations, and may be subject to instabilities and sensitivity to external input. Software able to succeed in this domain invariably embeds significant domain knowledge that should be tapped for future use. Unfortunately, most existing scientific software is designed in an ad hoc way, resulting in monolithic codes understood by only a few developers. Software architecture refers to the way software is structured to promote objectives such as reusability, maintainability, extensibility, and feasibility of independent implementation. Such issues have become increasingly important in the scientific domain, as software gets larger and more complex, constructed by teams of people, and evolved over decades. In the context of scientific computation, the challenge facing mathematical software practitioners is to design, develop, and supply computational components which deliver these objectives when embedded in end-user application codes. The Architecture of Scientific Software addresses emerging methodologies and tools for the rational design of scientific software, including component integration frameworks, network-based computing, formal methods of abstraction, application programmer interface design, and the role of object-oriented languages. This book comprises the proceedings of the International Federation for Information Processing (IFIP) Conference on the Architecture of Scientific Software, which was held in Ottawa, Canada, in October 2000. It will prove invaluable reading for developers of scientific software, as well as for researchers in computational sciences and engineering.
E-Government Website Development: Future Trends and Strategic Models focuses on three foundational aspects of e-government Web sites, namely concepts or theories that influence e-government Web site development, description and analysis of e-government Web site experience from different national perspectives, and possible models that might provide direction for future e-government development. The authors brilliantly incorporate a combination of basic concepts that will guide future development of governmental Web sites, descriptive research about the state of e-government in various parts of the world, and a specific prescription for the future of e-government Web sites into one essential compilation.
The SGML FAQ Book: Understanding the Foundation of HTML and XML is similar to, but not quite the same kind of thing as, an online FAQ or 'Frequently Asked Questions' list. It addresses questions from people who already use SGML in some way (including HTML authors) and people who are about to use it. It deals mainly with issues that arise when using SGML in practice. A very brief introduction to SGML is included as Appendix A. The questions discussed in The SGML FAQ Book are repeatedly heard by people who make their living serving the SGML community. SGML experts spend many hours teaching these details, sometimes repeatedly, because some questions do not seem important - until you run into them. So one benefit of this book is learning more of the art of document creation and management, both by general reading before questions arise and by specific reference when a question arises. For the latter use, the appendices, glossary, and index are particularly important. A second benefit of this book is that it provides a common theme to its answers that you can apply in your use of SGML, HTML and related languages in general. The fundamental answer to many of the questions boils down to 'simplify': many questions do not show up if you use the simple, elegant core of SGML without worrying about optional features. The credo of this book is simply, 'SGML doesn't need to be complicated'. SGML has the potential for complexity at certain points, but much of the complexity comes from optional parts and can be avoided. SGML methodology and its primary benefits suffer no loss even if you skip many features, which speaks well for the quality of SGML's overall design. Many of the questions discussed involve those optional parts, and therefore can be avoided by judicious designers and authors. The two key goals of the book are (1) to answer questions that you may actually encounter as an SGML user, and to help you get 'unstuck' and be as productive as possible in using the language, and (2) to show proactive ways you can simplify your use of SGML and get its very substantial benefits with minimal complexity.
Optimum envelope-constrained filter design is concerned with time-domain synthesis of a filter such that its response to a specific input signal stays within prescribed upper and lower bounds, while minimizing the impact of input noise on the filter output or the impact of the shaped signal on other systems depending on the application. In many practical applications, such as in TV channel equalization, digital transmission, and pulse compression applied to radar, sonar and detection, the soft least square approach, which attempts to match the output waveform with a specific desired pulse, is not the most suitable one. Instead, it becomes necessary to ensure that the response stays within the hard envelope constraints defined by a set of continuous inequality constraints. The main advantage of using the hard envelope-constrained filter formulation is that it admits a whole set of allowable outputs. From this set one can then choose the one which results in the minimization of a cost function appropriate to the application at hand. The signal shaping problems so formulated are semi-infinite optimization problems. This monograph presents in a unified manner results that have been generated over the past several years and are scattered in the research literature. The material covered in the monograph includes problem formulation, numerical optimization algorithms, filter robustness issues and practical examples of the application of envelope constrained filter design. Audience: Postgraduate students, researchers in optimization and telecommunications engineering, and applied mathematicians.
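In schematic terms, and in generic notation that is mine rather than the monograph's, the envelope-constrained problem described above can be written as:

```latex
% Schematic envelope-constrained filtering problem (generic notation):
% choose the filter h to minimize a noise-sensitivity cost while the
% response to the prescribed input s stays inside the envelope.
\begin{aligned}
\min_{h} \quad & \|h\|_2^2
  && \text{(output noise power for white input noise)} \\
\text{s.t.} \quad & \ell(t) \le (s * h)(t) \le u(t) \quad \text{for all } t
  && \text{(hard envelope constraints)}
\end{aligned}
```

The constraint must hold at every instant t, which is exactly what makes the problem semi-infinite, as the blurb notes.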
Fuzzy rule systems have found a wide range of applications in many fields of science and technology. Traditionally, fuzzy rules are generated from human expert knowledge or human heuristics for relatively simple systems. In the last few years, data-driven fuzzy rule generation has become very active. Compared to heuristic fuzzy rules, fuzzy rules generated from data are able to extract more profound knowledge for more complex systems. This book presents a number of approaches to the generation of fuzzy rules from data, ranging from direct fuzzy-inference-based methods to fuzzy rule generation based on neural networks and evolutionary algorithms. Besides approximation accuracy, special attention has been paid to the interpretability of the extracted fuzzy rules; in other words, the fuzzy rules generated from data are supposed to be as comprehensible to human beings as those generated from human heuristics. To this end, many aspects of the interpretability of fuzzy systems are discussed, which must be taken into account in data-driven fuzzy rule generation. In this way, fuzzy rules generated from data are intelligible to human users, and therefore knowledge about unknown systems can be extracted.
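As a toy illustration of what an interpretable fuzzy rule looks like (the rule and membership function below are mine, not examples from the book):

```python
# One human-readable fuzzy rule: IF temperature IS high THEN fan_speed IS fast.
# The consequent is scaled by the rule's firing degree -- a deliberate
# simplification of full fuzzy inference.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temperature):
    degree_high = triangular(temperature, 20.0, 35.0, 50.0)  # degree of "high"
    fast = 100.0                                             # value of "fast"
    return degree_high * fast

print(fan_speed(30.0))  # 30 C is only partially "high", so a moderate speed (~66.7)
```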
With the increasing proliferation of information technology and, especially, Web-based approaches to the implementation of systems and services, researchers, educators, and practitioners worldwide are experiencing a rising need for authoritative references to enhance their understanding of the most current and effective engineering practices leading to robust and successful solutions. "Integrated Approaches in Information Technology and Web Engineering: Advancing Organizational Knowledge Sharing" presents comprehensive, research-driven insights into the field of Web engineering. This book collects over 30 authoritative articles from distinguished international researchers in information technology and Web engineering, creating an invaluable resource for library reference collections that will equip researchers and practitioners in academia and industry alike with the knowledge base to drive the next generation of innovations.
This book presents a comprehensive, structured, up-to-date survey on instruction selection. The survey is structured according to two dimensions: approaches to instruction selection from the past 45 years are organized and discussed according to their fundamental principles, and according to the characteristics of the supported machine instructions. The fundamental principles are macro expansion, tree covering, DAG covering, and graph covering. The machine instruction characteristics introduced are single-output, multi-output, disjoint-output, inter-block, and interdependent machine instructions. The survey also examines problems that have yet to be addressed by existing approaches. The book is suitable for advanced undergraduate students in computer science, graduate students, practitioners, and researchers.
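To make the first of those fundamental principles concrete, here is a minimal macro-expansion selector; the IR and instruction names are hypothetical, and the real selectors covered by the survey are far more sophisticated:

```python
# An illustrative macro-expansion instruction selector (hypothetical IR and
# instruction names). Each IR operation expands independently into machine code.

MACROS = {
    "add": lambda dst, a, b: [f"ADD {dst}, {a}, {b}"],
    "mul": lambda dst, a, b: [f"MUL {dst}, {a}, {b}"],
    "load": lambda dst, addr: [f"LDR {dst}, [{addr}]"],
}

def select(ir_ops):
    """Expand each IR operation, one at a time, into machine instructions."""
    code = []
    for op, *args in ir_ops:
        code.extend(MACROS[op](*args))
    return code

# (t1 = x * y; t2 = t1 + z) expands node by node, with no cross-node
# pattern matching -- exactly the limitation that tree covering lifts.
print(select([("mul", "t1", "x", "y"), ("add", "t2", "t1", "z")]))
```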
A Software Process Model Handbook for Incorporating People's Capabilities offers the most advanced approach to date, empirically validated at software development organizations. The handbook adds a valuable contribution to the much-needed literature on people-related aspects in software engineering. Its primary focus is the particular challenge of extending software process definitions to address people-related considerations more explicitly. The capability concept is neither present nor considered in most software process models. The authors have developed a capabilities-oriented software process model, which has been formalized in UML and implemented as a tool. The book guides readers through the incorporation of the individual's capabilities into the software process. Structured to meet the needs of research scientists and graduate-level students in computer science and engineering, it is also suitable for practitioners in industry.
There are several approaches to attacking hard problems. All have their merits, but also their limitations, and need a large body of theory as their basis. A number of books exist for each one: books on complexity theory, others on approximation algorithms, heuristic approaches, parameterized complexity, and yet others on randomized algorithms. This book discusses all of the above approaches thoroughly and, amazingly, at the same time does so in a style that makes it accessible not only to theoreticians, but also to the non-specialist, to the student or teacher, and to the programmer. Do you think that mathematical rigor and accessibility contradict? Look at this book to find out that they do not, thanks to the admirable talent of the author to present his material in a clear and concise way, with the idea behind the approach spelled out explicitly, often with a revealing example. Reading this book is a beautiful experience and I can highly recommend it to anyone interested in learning how to solve hard problems. It is not just a condensed union of material from other books. Because it discusses the different approaches in depth, it has the chance to compare them in detail and, most importantly, to highlight under what circumstances which approach might be worth exploring. No book on a single type of solution can do that, but this book does it in an absolutely fascinating way that can serve as a pattern for theory textbooks with a high level of generality. (Peter Widmayer) The second edition extends the part on the method of relaxation to linear programming, with an emphasis on rounding, LP-duality, and the primal-dual schema, and provides a self-contained and transparent presentation of the design of randomized algorithms for primality testing.
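Since the blurb singles out randomized primality testing, here is the standard Miller-Rabin test as a sketch; it illustrates the kind of algorithm the second edition presents, though it is not necessarily the book's own exposition:

```python
# Standard Miller-Rabin randomized primality test (textbook version).
import random

def is_probably_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # handle small factors directly
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness of compositeness
    return True                     # error probability at most 4**(-rounds)

print(is_probably_prime(29341))     # False: a Carmichael number that fools the plain Fermat test
print(is_probably_prime(104729))    # True: the 10000th prime
```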
Here, one of the leading figures in the field provides a comprehensive survey of the subject, beginning with propositional logic and concluding with concurrent programming. It is based on graduate courses taught at Cornell University and is designed for use as a graduate text. Professor Schneider emphasises the use of formal methods and assertional reasoning, using notation and paradigms drawn from programming to drive the exposition, while exercises at the end of each chapter extend and illustrate the main themes covered. As a result, all those interested in studying concurrent computing will find this an invaluable approach to the subject.
This book presents a coherent description of the theoretical and practical aspects of Coloured Petri Nets (CP-nets or CPN). It shows how CP-nets have been developed - from being a promising theoretical model to being a full-fledged language for the design, specification, simulation, validation and implementation of large software systems (and other systems in which human beings and/or computers communicate by means of some more or less formal rules). The book contains the formal definition of CP-nets and the mathematical theory behind their analysis methods. However, it has been the intention to write the book in such a way that it also becomes attractive to readers who are more interested in applications than the underlying mathematics. This means that a large part of the book is written in a style which is closer to an engineering textbook (or a users' manual) than it is to a typical textbook in theoretical computer science. The book consists of three separate volumes. The first volume defines the net model (i.e., hierarchical CP-nets) and the basic concepts (e.g., the different behavioural properties such as deadlocks, fairness and home markings). It gives a detailed presentation of many small examples and a brief overview of some industrial applications. It introduces the formal analysis methods. Finally, it contains a description of a set of CPN tools which support the practical use of CP-nets.
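For readers new to the idea, here is a toy token game with "coloured" (data-carrying) tokens. It is a drastic simplification of my own, omitting the arc expressions, guards, hierarchy and analysis theory that the three volumes actually develop:

```python
# A toy token game with coloured tokens: a transition consumes a token from
# one place and produces a transformed token in another.
from collections import Counter

marking = {
    "waiting": Counter({("job", 1): 1, ("job", 2): 1}),  # tokens carry data
    "done": Counter(),
}

def fire_process(place_in="waiting", place_out="done"):
    """Fire the 'process' transition once, if it is enabled."""
    if not marking[place_in]:
        return False                       # not enabled: no token to consume
    token = next(iter(marking[place_in]))
    marking[place_in][token] -= 1
    marking[place_in] += Counter()         # drop zero-count entries
    _kind, ident = token
    marking[place_out][("result", ident)] += 1
    return True

while fire_process():
    pass
print(dict(marking["done"]))  # {('result', 1): 1, ('result', 2): 1}
```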
This book draws new attention to domain-specific conceptual modeling by presenting the work of thought leaders who have designed and deployed specific modeling methods. It provides hands-on guidance on how to build models in a particular domain, such as requirements engineering, business process modeling or enterprise architecture. In addition to these results, it also puts forward ideas for future developments. All this is enriched with exercises, case studies, detailed references and further related information. All domain-specific methods described in this volume also have a tool implementation within the OMiLAB Collaborative Environment - a dedicated research and experimentation space for modeling method engineering at the University of Vienna, Austria - making these advances accessible to a wider community of further developers and users. The collection of works presented here will benefit experts and practitioners from academia and industry alike, including members of the conceptual modeling community as well as lecturers and students.
The proceedings represent the state of knowledge in the area of algorithmic differentiation (AD). The 31 contributed papers presented at the AD2012 conference cover the application of AD to many areas in science and engineering as well as aspects of AD theory and its implementation in tools. For all papers the referees, selected from the program committee and the greater community, as well as the editors have emphasized accessibility of the presented ideas also to non-AD experts. In the AD tools arena new implementations are introduced covering, for example, Java and graphical modeling environments or join the set of existing tools for Fortran. New developments in AD algorithms target the efficiency of matrix-operation derivatives, detection and exploitation of sparsity, partial separability, the treatment of nonsmooth functions, and other high-level mathematical aspects of the numerical computations to be differentiated. Applications stem from the Earth sciences, nuclear engineering, fluid dynamics, and chemistry, to name just a few. In many cases the applications in a given area of science or engineering share characteristics that require specific approaches to enable AD capabilities or provide an opportunity for efficiency gains in the derivative computation. The description of these characteristics and of the techniques for successfully using AD should make the proceedings a valuable source of information for users of AD tools.
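The classic way to see the principle behind such tools is forward-mode AD via dual numbers; the sketch below is the textbook illustration of that principle, not code from any of the contributed papers:

```python
# Forward-mode AD in miniature: a dual number carries a value together with
# its derivative, and arithmetic propagates both by the chain rule.

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1    # f'(x) = 6x + 2

x = Dual(4.0, 1.0)                  # seed derivative 1.0: differentiate w.r.t. x
y = f(x)
print(y.value, y.deriv)             # 57.0 26.0
```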
This textbook presents a survey of research on Boolean functions, circuits, parallel computation models, function algebras, and proof systems. Its main aim is to elucidate the structure of "fast" parallel computation. The complexity of parallel computation is emphasized through a variety of techniques ranging from finite combinatorics, probability theory and finite group theory to finite model theory and proof theory. Nonuniform computation models are studied in the form of Boolean circuits; uniform ones in a variety of forms. Steps in the investigation of non-deterministic polynomial time are surveyed, as is the complexity of various proof systems. The book will benefit advanced undergraduates and graduate students as well as researchers in the field of complexity theory.
In this book, the author considers separable programming and, in particular, one of its important cases: convex separable programming. Some general results are presented, and techniques for approximating the separable problem by linear programming and by dynamic programming are considered. Convex separable programs subject to inequality/equality constraints and bounds on variables are also studied, and iterative algorithms of polynomial complexity are proposed. As an application, these algorithms are used in the implementation of stochastic quasigradient methods for some separable stochastic programs. Numerical approximation with respect to the ℓ1 and ℓ∞ norms, as a convex separable nonsmooth unconstrained minimization problem, is considered as well. Audience: advanced undergraduate and graduate students, mathematical programming/operations research specialists.
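In generic notation (mine, not the author's), a convex separable program has the form below; separability - every function splitting into a sum of single-variable convex terms - is what makes piecewise-linear LP approximations and coordinate-wise methods applicable.

```latex
% Generic convex separable program: objective and constraint are sums of
% single-variable convex terms, with bounds on each variable.
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & \sum_{j=1}^{n} f_j(x_j) \\
\text{s.t.} \quad & \sum_{j=1}^{n} g_j(x_j) \le b \quad (\text{or } = b), \\
& a_j \le x_j \le d_j, \qquad j = 1, \dots, n.
\end{aligned}
```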