Systems for Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) are currently separate. The potential of the latest technologies and changes in operational and analytical applications over the last decade have given rise to the unification of these systems, which can be of benefit for both workloads. Research and industry have reacted and prototypes of hybrid database systems are now appearing. Benchmarks are the standard method for evaluating, comparing and supporting the development of new database systems. Because of the separation of OLTP and OLAP systems, existing benchmarks are only focused on one or the other. With the rise of hybrid database systems, benchmarks to assess these systems will be needed as well. Based on the examination of existing benchmarks, a new benchmark for hybrid database systems is introduced in this book. It is furthermore used to determine the effect of adding OLAP to an OLTP workload and is applied to analyze the impact of typically used optimizations in the historically separate OLTP and OLAP domains in mixed-workload scenarios.
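To make the mixed-workload idea above concrete, here is a minimal Python sketch of one step of a hybrid OLTP/OLAP benchmark: transactional inserts and an analytical aggregate run against the same table, each timed separately. The sqlite3 schema, row counts, and queries are illustrative assumptions, not the benchmark the book defines.

```python
# A minimal sketch of a mixed (hybrid) OLTP/OLAP workload; the "orders"
# schema and all parameters here are hypothetical illustrations.
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

def oltp_insert(n):
    # OLTP side: many small transactional writes
    for _ in range(n):
        conn.execute("INSERT INTO orders (region, amount) VALUES (?, ?)",
                     (random.choice(["north", "south"]), random.uniform(1, 500)))
    conn.commit()

def olap_report():
    # OLAP side: one scan-heavy analytical aggregate over the same data
    return conn.execute(
        "SELECT region, COUNT(*), AVG(amount) FROM orders GROUP BY region"
    ).fetchall()

start = time.perf_counter()
oltp_insert(10_000)
oltp_seconds = time.perf_counter() - start

start = time.perf_counter()
report = olap_report()
olap_seconds = time.perf_counter() - start

print(f"OLTP batch: {oltp_seconds:.3f}s, OLAP query: {olap_seconds:.3f}s, report: {report}")
```

Comparing the two timings with and without the analytical query running gives a first feel for the effect of adding OLAP to an OLTP workload that the book measures rigorously.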
Three dominant forces are driving change in financial markets worldwide today: competition, technology and regulation. But while each force may be viewed individually as desirable or well-intentioned, their collective impact in reshaping the markets is producing results that are difficult to predict, hard to control and not easy to understand. Extreme market turbulence has underlined the key issues as much attention turns to the appropriate regulatory response. That is the backdrop for this thought-provoking book, emerging from a Baruch College conference on equity market structure in the aftermath of the global financial crisis, and featuring contributions from an acclaimed panel of international scholars, policymakers, regulators, and industry leaders. The result presents emerging perspectives and ideas that illuminate the dynamics of financial regulation today and into the future. The Zicklin School of Business Financial Markets Series presents the insights emerging from a sequence of conferences hosted by the Zicklin School at Baruch College for industry professionals, regulators, and scholars. Much more than historical documents, the transcripts from the conferences are edited for clarity, perspective and context; material and comments from subsequent interviews with the panelists and speakers are integrated for a complete thematic presentation. Each book focuses on a well-delineated topic, but all deliver broader insights into the quality and efficiency of the U.S. equity markets and the dynamic forces changing them.
Forecasting is a crucial function for companies in the fashion industry, but in many real-life forecasting applications in the field, the data patterns are notoriously volatile and it is very difficult, if not impossible, to learn the underlying patterns analytically. As a result, many traditional methods (such as pure statistical models) fail to make sound predictions. Over the past decade, advances in artificial intelligence and computing technologies have provided an alternative way of generating precise and accurate forecasting results for fashion businesses. Despite being an important and timely topic, there is currently no comprehensive reference source providing up-to-date theoretical and applied research findings on intelligent fashion forecasting systems. This three-part handbook fulfills this need and covers material ranging from introductory studies and technical reviews, through theoretical modeling research, to intelligent fashion forecasting applications and analysis. This book is suitable for academic researchers, graduate students, senior undergraduate students and practitioners who are interested in the latest research on fashion forecasting.
A groundbreaking, flexible approach to computer science and data science. The Deitels' Introduction to Python for Computer Science and Data Science: Learning to Program with AI, Big Data and the Cloud offers a unique approach to teaching introductory Python programming, appropriate for both computer-science and data-science audiences. Providing the most current coverage of topics and applications, the book is paired with extensive traditional supplements as well as Jupyter Notebooks supplements. Real-world datasets and artificial-intelligence technologies allow students to work on projects making a difference in business, industry, government and academia. Hundreds of examples, exercises, projects (EEPs) and implementation case studies give students an engaging, challenging and entertaining introduction to Python programming and hands-on data science. The book's modular architecture enables instructors to conveniently adapt the text to a wide range of computer-science and data-science courses offered to audiences drawn from many majors. Computer-science instructors can integrate as many or as few data-science and artificial-intelligence topics as they'd like, and data-science instructors can integrate as much or as little Python as they'd like. The book aligns with the latest ACM/IEEE CS-and-related computing curriculum initiatives and with the Data Science Undergraduate Curriculum Proposal sponsored by the National Science Foundation.
'CRM Systems in Industrial Companies' contributes new knowledge on customer relationship management (CRM) in the field of industrial marketing. Based on an in-depth case study, this book highlights the complexity and challenges in the development, implementation and use of CRM. The volume proposes an alternative conceptualization of CRM: relying on the industrial marketing and purchasing (IMP) perspective, CRM becomes a socio-technical 'resource' which needs to be connected to the other resources before it can create effects on customer relationships.
This book presents a systematic literature review of 156 published papers on business model innovation (BMI). The aim is to identify and integrate the different theoretical perspectives, analytical levels, and empirical contexts in order to deepen understanding of this complex phenomenon. The authors conduct an inductive thematic analysis based on an informal ontological classification that identifies 56 key themes. Within each theme, discussion focuses on thematic patterns, potential inconsistencies and debates, and future directions and opportunities for research. The book makes a number of significant contributions to the field. First, it offers a deeper understanding of the evolution of research on BMI through an ontological map that identifies the key thematic areas in the literature. Second, a multilevel model is developed that clarifies the concept of BMI by identifying its drivers, contingencies, and outcomes. Third, the authors identify clear and specific directions for further research and offer suggestions on research design, creating an informative road map for the future. The book will be of value both to scholars and researchers and to practitioners.
This book describes analytical techniques for optimizing knowledge acquisition, processing, and propagation, especially in the contexts of cyber-infrastructure and big data. Further, it presents easy-to-use analytical models of knowledge-related processes and their applications. The need for such methods stems from the fact that, when we have to decide where to place sensors, or which algorithm to use for processing the data, we mostly rely on experts' opinions. As a result, the selected knowledge-related methods are often far from ideal. To make better selections, it is necessary to first create easy-to-use models of knowledge-related processes. This is especially important for big data, where traditional numerical methods are unsuitable. The book offers a valuable guide for everyone interested in big data applications: students looking for an overview of related analytical techniques, practitioners interested in applying optimization techniques, and researchers seeking to improve and expand on these techniques.
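As a toy illustration of the sensor-placement decisions mentioned above, the following Python sketch applies a greedy coverage heuristic; the candidate sites, the regions they observe, and the heuristic itself are illustrative assumptions, not the book's models.

```python
# A minimal sketch of a knowledge-acquisition decision: choosing sensor
# locations greedily by marginal coverage. All data here are hypothetical.
candidate_sites = {
    "A": {1, 2, 3},  # each site observes a set of regions
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
}

def greedy_placement(sites, budget):
    covered, chosen = set(), []
    for _ in range(budget):
        # pick the site that adds the most not-yet-covered regions
        best = max(sites, key=lambda s: len(sites[s] - covered))
        if not sites[best] - covered:
            break  # nothing new to gain
        chosen.append(best)
        covered |= sites[best]
    return chosen, covered

print(greedy_placement(candidate_sites, budget=2))  # e.g. (['A', 'C'], {1, 2, 3, 4, 5, 6})
```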
This book brings together recent qualitative research studies of enterprise-wide implementations. The collection is useful as teaching cases for academia, as a student reference, and for academics, researchers and IT practitioners who wish to gain a broad view of ERP implementation success and failure. The book provides relevant methodologies and recent empirical research findings in the area, and includes sufficient background information for an understanding of each case while focusing on a rich description of more than a dozen real-life cases.
Universities are increasingly being asked to play a greater role in their communities. With the growth of the technology industry and the increasing importance of the Internet in education and everyday life, academic IT departments are beginning to form partnerships with both non-profit and for-profit organizations in the local community. These partnerships can relate to the whole curriculum, to specific classes, to student internships, to theoretical research, and to industrial research, and there are many other possibilities for IT/Community partnerships. Managing IT/Community Partnerships in the 21st Century explores the various possibilities for partnerships between academic IT departments and community-based organizations.
Probabilistic Methods for Financial and Marketing Informatics aims to provide students with insights and a guide explaining how to apply probabilistic reasoning to business problems. Rather than dwelling on rigor, algorithms, and proofs of theorems, the authors concentrate on showing examples and using the software package Netica to represent and solve problems. The book contains unique coverage of probabilistic reasoning topics applied to business problems, including marketing, banking, operations management, and finance. It shares insights about when and why probabilistic methods can and cannot be used effectively. This book is recommended for all R&D professionals and students who are involved with industrial informatics, that is, applying the methodologies of computer science and engineering to business or industry information. This includes computer science and other professionals in the data management and data mining field whose interests are business and marketing information in general, and who want to apply AI and probabilistic methods to their problems in order to better predict how well a product or service will do in a particular market, for instance. Typical fields where this technology is used are in advertising, venture capital decision making, operational risk measurement in any industry, credit scoring, and investment science.
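The flavor of the probabilistic reasoning this book applies to marketing can be shown with a single application of Bayes' rule in Python; the numbers below are entirely hypothetical, and the book itself builds such models graphically in Netica rather than in code.

```python
# A minimal Bayes' rule illustration for a marketing question:
# how likely is a visitor to be a buyer, given that they clicked a
# promotion? All probabilities here are made-up assumptions.
p_buyer = 0.05               # prior: fraction of visitors who buy
p_click_given_buyer = 0.60   # buyers often click the promotion
p_click_given_nonbuyer = 0.10

# total probability of observing a click
p_click = (p_click_given_buyer * p_buyer
           + p_click_given_nonbuyer * (1 - p_buyer))

# posterior: P(buyer | click)
p_buyer_given_click = p_click_given_buyer * p_buyer / p_click
print(f"P(buyer | click) = {p_buyer_given_click:.3f}")  # ~0.240
```

A Bayesian network tool such as Netica chains many such conditional tables together, so the same kind of update propagates through an entire model of a market.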
This volume collects a selection of refereed papers from the more than one hundred presented at the International Conference MAF 2008 - Mathematical and Statistical Methods for Actuarial Sciences and Finance. The conference was organised by the Department of Applied Mathematics and the Department of Statistics of the University Ca' Foscari Venice (Italy), with the collaboration of the Department of Economics and Statistical Sciences of the University of Salerno (Italy). It was held in Venice, from March 26 to 28, 2008, at the prestigious Cavalli Franchetti palace, along the Grand Canal, of the Istituto Veneto di Scienze, Lettere ed Arti. This conference was the first international edition of a biennial national series begun in 2004, born of the brilliant belief of the colleagues - and friends - of the Department of Economics and Statistical Sciences of the University of Salerno: the idea that cooperation between mathematicians and statisticians working in actuarial sciences, in insurance and in finance can improve research on these topics. The proof of this lies in the wide participation in these events. In particular, with reference to the 2008 international edition:
- more than 150 attendants, both academicians and practitioners;
- more than 100 accepted communications, organised in 26 parallel sessions, from authors coming from about twenty countries (namely: Canada, Colombia, Czech Republic, France, Germany, Great Britain, Greece, Hungary, Ireland, Israel, Italy, Japan, Poland, Spain, Sweden, Switzerland, Taiwan, USA);
- two plenary guest-organised sessions; and
- a prestigious keynote lecture delivered by Professor Wolfgang Härdle of the Humboldt University of Berlin (Germany).
This book explores how agile development practices, in particular pair programming, code review and automated testing, help software development teams to perform better. Agile software engineering has become the standard software development paradigm over the last decade, and the insights provided here are taken from a large-scale survey of 80 professional software development teams working at SAP SE in Germany. In addition, the book introduces a novel measurement tool for assessing the performance of software development teams. No previous study has researched this topic with a similar data set comprising insights from more than 450 professional software engineers.
The resource transfer problem (RTP) is a modeling and solution framework for integrated complex scheduling and rich vehicle routing problems. It allows the modeling of a wide variety of scheduling problems, vehicle routing problems, and integrated combinations of the two, as well as various specific requirements and restrictions arising in practical scheduling and vehicle routing. Based on the unifying resource transfer problem framework, this book proposes a generic constraint propagation approach that exploits the specific structure of scheduling and routing problems.
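A tiny Python sketch of the constraint-propagation idea, applied to routing time windows: earliest service times are propagated along a fixed visit sequence and tightened against each window. The route, travel times, and windows below are made-up illustrations, not the book's RTP formulation.

```python
# A minimal forward propagation of earliest service times along a route.
# Stops, travel times, and time windows are hypothetical examples.
stops = ["depot", "A", "B", "C"]
travel = {("depot", "A"): 30, ("A", "B"): 20, ("B", "C"): 25}
window = {"depot": (0, 0), "A": (0, 120), "B": (60, 150), "C": (90, 200)}

earliest = {}
for prev, cur in zip(stops, stops[1:]):
    arrive = earliest.get(prev, window["depot"][0]) + travel[(prev, cur)]
    lo, hi = window[cur]
    if arrive > hi:
        raise ValueError(f"time window at {cur} is violated")
    # tighten: service cannot start before max(arrival, window opening)
    earliest[cur] = max(arrive, lo)

print(earliest)  # {'A': 30, 'B': 60, 'C': 90}
```

A full propagation scheme would also tighten latest times backwards and prune infeasible assignments, which is the kind of structure-exploiting reasoning the book generalizes.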
Organizations that need combined information technology and management skills often find that strategic information and intelligence is not readily available. How to scan management environments for relevant information, and then make sense of that information, remains a challenge. Managing Strategic Intelligence: Techniques and Technologies builds a network of excellence in effectively managing strategic information for senior management. It focuses on environmental information scanning and organization-wide support for strategic intelligence. The book prompts further development of theories and best practices in strategic intelligence, and provides future direction for innovative systems using intelligent agents. It also provides practical guidance to organizations on developing effective approaches, mechanisms, and systems to scan, refine, and support strategic information provision.
This book is a first: it fills a major gap in the market and provides a wide snapshot of intelligent technologies for inconsistency resolution. The need to resolve knowledge inconsistency arises in many practical applications of computer systems. This kind of inconsistency results from the use of various resources of knowledge in realizing practical tasks. These resources are often autonomous and use different mechanisms for processing knowledge about the same real world, which can lead to compatibility problems.
In recent years, with rapidly advancing technology and a more globalized culture, the importance of Information Systems has become paramount. The application of Information Systems has made a huge impact on the service sector, both public and private. Information Systems and New Applications in the Service Sector: Models and Methods examines current, state-of-the-art research in the area of service sectors and their interactions, linkages, applications, and support using information systems. This publication encompasses theoretical, analytical, and empirical research, as well as comprehensive reviews of relevant research, technical reports, and case studies of effective applications in this area. The use of new theories, technologies, models, methods, techniques, and principles is emphasized, all while explaining the relationship between the advancement of the service sector and the evolution of information systems.
An important aspect of managing human capital in the 21st-century workplace is managing the interface between humans and information technology, particularly the World Wide Web. The Web has changed not only how and where business is conducted, but also how and where work is done. Personal web usage has created many desirable organizational outcomes, such as reducing the cost of communication and restructuring how work is performed. However, it has also generated undesirable outcomes: loss of intellectual property, sexual harassment lawsuits, productivity losses due to surfing, security threats, and network bandwidth overload from visits to web sites for travel, leisure, sports, and news. The mechanisms controlling the interface of individual and institution in the flexible, open, autonomous work environment created by the Web are emergent phenomena, and the lines between legitimate and illegitimate usage are just beginning to be understood. Personal Web Usage in the Workplace: A Guide to Effective Human Resources Management examines topics that embrace a wide array of personal web usage issues, among them antecedents of web usage, frameworks and models of web usage, web technologies for monitoring usage, web usage within other cultures and countries, and measurement issues of web usage.
Business process reengineering (BPR) focuses on redesigning strategic, value-added processes that transcend organizational boundaries. It is a cross-functional approach that requires support from almost all departments of the organization. Business Process Reengineering: Automation Decision Points in Process Reengineering offers a new framework for process reengineering and links it to the organization life cycle, process life cycle, and process management. This volume describes the fundamental concepts behind business process reengineering and examines them through case studies; it should appeal to researchers and academics interested in business process reengineering, operations strategy, and organizational restructuring and design.
This book presents source code modularization as a key activity in reverse engineering for extracting the software architecture from existing source code. To this end, it provides detailed techniques for source code modularization and discusses their effects on different software quality attributes. It is not, however, a mere survey of source code modularization algorithms, but rather a consistent and unifying theoretical modularization framework, and as such is the first publication that comprehensively examines the models and techniques for source code modularization. It enables readers to gain a thorough understanding of topics like software artifact proximity, hierarchical and partitional modularization algorithms, search- and algebraic-based software modularization, software modularization evaluation techniques, and the relationship between software quality attributes and modularization. This book introduces students and software professionals to the fundamental ideas of source code modularization concepts, similarity/dissimilarity metrics, modularization metrics, and quality assurance. Further, it allows undergraduate and graduate students in software engineering, computer science, and computer engineering with no prior experience in the software industry to explore the subject step by step. Practitioners benefit from the structured presentation and comprehensive nature of the materials, while the large number of bibliographic references makes this book a valuable resource for researchers working on source code modularization.
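One building block named above, artifact proximity, can be sketched in a few lines of Python; the modules, their dependency sets, and the choice of Jaccard similarity here are illustrative assumptions rather than the book's specific metrics.

```python
# A minimal similarity metric between source modules: Jaccard similarity
# over each module's set of dependencies. Module names are made-up examples.
deps = {
    "parser.c": {"lexer.h", "ast.h", "util.h"},
    "lexer.c":  {"lexer.h", "util.h"},
    "render.c": {"ast.h", "gfx.h"},
}

def jaccard(a, b):
    # |intersection| / |union|; 1.0 means identical dependency sets
    return len(a & b) / len(a | b)

pairs = [("parser.c", "lexer.c"), ("parser.c", "render.c"), ("lexer.c", "render.c")]
for m1, m2 in pairs:
    print(m1, m2, round(jaccard(deps[m1], deps[m2]), 2))
# prints 0.67, 0.25, 0.0; a modularization algorithm would then cluster
# modules with high pairwise similarity into candidate architectural units
```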
During the 21st century business environments have become more complex and dynamic than ever before. Companies operate in a world of change influenced by globalisation, volatile markets, legal changes and technical progress. As a result, they have to handle growing volumes of data and therefore require fast storage, reliable data access, intelligent retrieval of information and automated decision-making mechanisms, all provided at the highest level of service quality. Successful enterprises are aware of these challenges and efficiently respond to the dynamic environment in which their business operates. Business Intelligence (BI) and Performance Management (PM) offer solutions to these challenges and provide techniques to enable effective business change. The important aspects of both topics are discussed within this state-of-the-art volume. It covers the strategic support, business applications, methodologies and technologies from the field, and explores the benefits, issues and challenges of each. Issues are analysed from many different perspectives, ranging from strategic management to data technologies, and the different subjects are complemented and illustrated by numerous examples of industrial applications. Contributions are authored by leading academics and practitioners representing various universities, research centres and companies worldwide. Their experience covers multiple disciplines and industries, including finance, construction, logistics, and public services, amongst others. Business Intelligence and Performance Management is a valuable source of reference for graduates approaching MSc or PhD programs and for professionals in industry researching in the fields of BI and PM for industrial application.
This book provides user studies and theories related to user-centered technology design processes for e-government projects. It mainly discusses the inherent issues of technology design, user experience, and guidelines for technology appropriation. Ethnographic studies focusing on real-life examples enable readers to understand the problems effectively, and the theories and results help researchers and practitioners handle these challenges efficiently. E-government is about harnessing the information revolution to improve the efficiency of government processes and the lives of citizens. It aims at a citizen-centered approach to governance through effective use of the Internet and information and communication technologies (ICTs). E-government promotes transparency and effectiveness of a government's processes as well as citizens' participation (e-participation) in the affairs of the government. While e-government projects are huge undertakings for government departments, a user-centric approach requires citizens' participation in the design and delivery of e-government services. In both respects there are huge challenges, and governments require long-term commitment as well as correct planning and sufficient financial resources to address them. System design for e-governmental applications is inherently a complex process, and in successful e-government projects an appropriately designed technology infrastructure plays a pivotal role. The technology appropriation process requires that e-government technologies be in line with the work practices of end users, so that successful usage of these technologies can be realized. E-governmental systems that fail to take such human factors into account result in failure, wasting huge amounts of public money and eroding public confidence in such technological infrastructures. It is highly important that citizens have access to the appropriate information technology, have the knowledge and skills to use it, and have the positive commitment to affect government strategies; enabling citizens to participate effectively is therefore much more difficult. This book addresses these inherent challenges and the available opportunities with respect to user-centric e-government.
Speed as a factor for success. Our modern industrial society lives life in the fast lane. The catchwords "faster," "shorter," "more powerful" reflect what we experience in almost all aspects of our lives. Whether at home or at work, we are constantly on the move and in a rush. In our private lives we find rapid exchange of information most entertaining and we are fascinated by the wide range of information that pours in on us from all around the world, mainly via the new media. It gives us the feeling of being a part of the action everywhere and all the time. Seldom are we aware that the only reason this flood of information, often referred to as "overstimulation," does not lead to overkill is that we manage to organize our time effectively. There are many parallels to this in the business world. Here too, a great deal of time pressure is exerted from outside; goals are set ever higher and deadlines become tighter. In other words, demands on our time demand faster reaction. Crucial information travels around the globe - across all time zones - in a matter of seconds. In fact, instead of CET or CEST, it would make sense to have a single time zone for the worldwide network called GST, for Global Simultaneous Time. In business more so than in private life, we are almost constantly online.
Advances in social science research methodologies and data analytic methods are changing the way research in information systems is conducted. New developments in statistical software technologies for data mining (DM), such as regression splines or decision tree induction, can be used to assist researchers in systematic post-positivist theory testing and development. Established management science techniques like data envelopment analysis (DEA) and value-focused thinking (VFT) can be used in combination with traditional statistical analysis and data mining techniques to more effectively explore behavioral questions in information systems research. As adoption and use of these research methods expand, there is a growing need for a resource book to assist doctoral students and advanced researchers in understanding their potential to contribute to a broad range of research problems. "Advances in Research Methods for Information Systems Research: Data Mining, Data Envelopment Analysis, Value Focused Thinking" focuses on bridging and unifying these three methodologies in order to bring them together in a unified volume for the information systems community. This book serves as a resource that provides overviews of each method, as well as applications showing how they can be employed to address IS research problems. Its goal is to help researchers in their continuous efforts to set the pace for an appropriate interplay between behavioral research and design science.
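As a small illustration of the decision-tree induction named above, the sketch below fits a shallow tree on toy technology-acceptance data; scikit-learn and the features used are assumptions for illustration, not tools or data from the book.

```python
# A minimal decision-tree induction example on hypothetical IS survey data.
from sklearn.tree import DecisionTreeClassifier

# hypothetical features: [perceived_usefulness (1-7), perceived_ease (1-7)]
X = [[6, 6], [7, 5], [2, 3], [3, 2], [5, 6], [2, 2], [6, 7], [3, 3]]
y = [1, 1, 0, 0, 1, 0, 1, 0]  # 1 = adopted the system, 0 = did not

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

print(tree.predict([[6, 5], [2, 4]]))  # e.g. [1 0]
```

The induced splits can then be read as candidate hypotheses (for instance, a usefulness threshold that separates adopters), which is the post-positivist theory-development role the book assigns to data mining.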
This is a practical guide to solutions for forecasting demand for services and products in international markets - and much more than just a listing of dry theoretical methods. Leading experts present studies on improving methods for forecasting numbers of incoming patent filings at the European Patent Office. These are reviewed by practitioners of the existing methods, revealing that it may not always be wise to trust established regression approaches.
You may like...
- Fat Chance - Probability from 0 to 1, by Benedict Gross, Joe Harris, … (Hardcover): R1,923 (Discovery Miles 19 230)
- Financial Mathematics - A Computational…, by K. Pereira, N. Modhien, … (Paperback): R326 (Discovery Miles 3 260)