This book presents a coherent, novel vision of Smart Cities, built around a value-driven architecture. It describes the limitations of the contemporary notion of the Smart City and argues that the next developmental step must actively include not only the physical infrastructure, but information technology and human infrastructure as well, requiring the intensive integration of technical solutions from the Internet of Things (IoT) and social computing. The book is divided into five major parts, the first of which provides both a general introduction and a coherent vision that ties together all the components that are required to realize the vision for Smart Cities. Part II then discusses the provisioning and governance of Smart City systems and infrastructures. In turn, Part III addresses the core technologies and technological enablers for managing the social component of the Smart City platform. Both parts combine state-of-the-art research with cutting-edge industrial efforts in the respective fields. Lastly, Part IV details a road map to achieving Cyber-Human Smart Cities. Rounding out the coverage, it discusses the concrete technological advances needed to move beyond contemporary Smart Cities and toward the Smart Cities of the future. Overall, the book provides an essential overview of the latest developments in the areas of IoT and social computing research, and outlines a research roadmap for a closer integration of the two areas in the context of the Smart City. As such, it offers a valuable resource for researchers and graduate students alike.
Motivation: Modern enterprises rely on database management systems (DBMS) to collect, store and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents, and most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, database management providers are being asked to extend the capabilities of the DBMS to include object-relational models as well as full object-oriented database management systems.
Organizing websites is highly dynamic and often chaotic work. Thus, it is crucial that host web servers manipulate URLs in order to cope with temporarily or permanently relocated resources, prevent attacks by automated worms, and control resource access. The Apache mod_rewrite module has long inspired fits of joy because it offers an unparalleled toolset for manipulating URLs. "The Definitive Guide to Apache mod_rewrite" guides you through configuration and use of the module for a variety of purposes, including basic and conditional rewrites, access control, virtual host maintenance, and proxies. The book was authored by Rich Bowen, noted Apache expert and Apache Software Foundation member, and draws on his years of experience administering the Apache server, as well as regularly speaking and writing about it.
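To give a flavor of what a basic rewrite does, here is a rough Python analogy to a single rewrite rule; this is an illustrative sketch, not Apache configuration syntax, and the URL pattern is a made-up example:

    import re

    def rewrite(url: str) -> str:
        """Map an old article URL to its relocated home (hypothetical pattern)."""
        return re.sub(r"^/articles/(\d+)\.html$", r"/archive/article/\1", url)

    print(rewrite("/articles/42.html"))  # -> /archive/article/42
    print(rewrite("/about.html"))        # no match, returned unchanged

In Apache itself, such a mapping is expressed with the module's RewriteRule directive, with match conditions supplied by RewriteCond.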
This book tackles recent research directions in using newly emerged technologies during the COVID-19 pandemic. It mainly focuses on emerging technologies and their impact on health care, education, and society. It also provides insights into the current challenges and constraints of using technologies during the pandemic, and exposes new opportunities for future research in the domain.
Modern applications are both data-intensive and computationally intensive, and require the storage and manipulation of voluminous traditional (alphanumeric) and nontraditional data sets (images, text, geometric objects, time-series). Examples of such emerging application domains are: Geographical Information Systems (GIS), Multimedia Information Systems, CAD/CAM, Time-Series Analysis, Medical Information Systems, On-Line Analytical Processing (OLAP), and Data Mining. These applications pose diverse requirements with respect to the information and the operations that need to be supported. From the database perspective, new techniques and tools therefore need to be developed towards increased processing efficiency. This monograph explores the way spatial database management systems aim at supporting queries that involve the space characteristics of the underlying data, and discusses query processing techniques for nearest neighbor queries. It provides both basic concepts and state-of-the-art results in spatial databases and parallel processing research, and studies numerous applications of nearest neighbor queries.
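To make the notion of a nearest neighbor query concrete, the following minimal Python sketch answers one by brute-force linear scan over 2D points; the data and function name are illustrative assumptions, and the monograph's subject is precisely the spatial index structures that avoid scanning every point:

    import math

    def nearest_neighbor(points, query):
        """Return the point closest to `query` under Euclidean distance."""
        return min(points, key=lambda p: math.dist(p, query))

    sites = [(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0)]
    print(nearest_neighbor(sites, (6.0, 5.0)))  # -> (5.0, 4.0)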
The primary aim of this book is to gather and collate articles which represent the best and latest thinking in the domain of technology transfer, from research, academia and practice around the world. We envisage that the book will, as a result, represent an important source of knowledge in this domain for students (undergraduate and postgraduate), researchers, practitioners and consultants, chiefly in the software engineering and IT industries, but also in management and other organisational and social disciplines. An important aspect of the book is the role that reflective practitioners (and not just academics) play. They will be involved in the production and evaluation of contributions, as well as in the design and delivery of the conference events upon which, of course, the book will be based.
This book constitutes the refereed proceedings of the 21st International TRIZ Future Conference on Automated Invention for Smart Industries, TFC 2021, held virtually in September 2021 and sponsored by IFIP WG 5.4. The 28 full papers and 8 short papers presented were carefully reviewed and selected from 48 submissions. They are organized in the following thematic sections: inventiveness and TRIZ for sustainable development; TRIZ, intellectual property and smart technologies; TRIZ: expansion in breadth and depth; TRIZ, data processing and artificial intelligence; and TRIZ use and divulgation for engineering design and beyond. The chapter "Domain Analysis with TRIZ to Define an Effective 'Design for Excellence'" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
This book highlights research that contributes to a better understanding of emerging challenges in information systems (IS) outsourcing. Important topics covered include: how to digitally innovate through IS outsourcing; how to govern outsourced digitalization projects; how to cope with complex multi-vendor and micro-services arrangements; how to manage data sourcing and data partnerships, including issues of cybersecurity; and how to cope with the increasing demands of internationalization and new sourcing models, such as crowdsourcing, cloud sourcing and robotic process automation. These issues are approached from the client's perspective, vendor's perspective, or both. Given its scope, the book will be of interest to all researchers and students in the fields of Information Systems, Management, and Organization, as well as corporate executives and professionals seeking a more profound analysis of the underlying factors and mechanisms of outsourcing.
Foundations of Dependable Computing: System Implementation explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead. A companion to this volume (published by Kluwer), subtitled Models and Frameworks for Dependable Systems, presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in its two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in this volume, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems.
Text Retrieval and Filtering: Analytical Models of Performance is the first book that addresses the problem of analytically computing the performance of retrieval and filtering systems. The book describes means by which retrieval may be studied analytically, allowing one to describe current performance, predict future performance, and understand why systems perform as they do. The focus is on retrieving and filtering natural language text, with material addressing retrieval performance for the simple case of queries with a single term, the more complex case with multiple terms, both with term independence and term dependence, and for the use of grammatical information to improve performance. Unambiguous statements of the conditions under which one method or system will be more effective than another are developed. Text Retrieval and Filtering: Analytical Models of Performance focuses on the performance of systems that retrieve natural language text, considering full sentences as well as phrases and individual words. The last chapter explicitly addresses how grammatical constructs and methods may be studied in the context of retrieval or filtering system performance. The book builds toward solving this problem, although the material in earlier chapters is as useful to those addressing non-linguistic, statistical concerns as it is to linguists. Those interested in grammatical information are encouraged to examine the earlier chapters carefully, especially Chapters 7 and 8, which discuss purely statistical relationships between terms, before moving on to Chapter 10, which explicitly addresses linguistic issues. Text Retrieval and Filtering: Analytical Models of Performance is suitable as a secondary text for a graduate level course on Information Retrieval or Linguistics, and as a reference for researchers and practitioners in industry.
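As a hint of the kind of analysis involved, the term independence assumption mentioned above can be written as a factorization of the joint probability of term occurrences given relevance (an illustrative rendering in standard notation, not necessarily the book's own):

    P(t_1, t_2 \mid R) = P(t_1 \mid R)\, P(t_2 \mid R)

Term dependence models drop this factorization and must estimate the joint distribution directly, which is what makes the multiple-term case analytically harder.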
This book discusses advancements in artificial intelligence techniques used to support human healthcare and well-being. It details the techniques used in the collection, storage and analysis of data, and their usage in different healthcare solutions. It also discusses techniques of predictive analysis for the early diagnosis of critical diseases. The edited book is divided into four parts: part A introduces artificial intelligence and machine learning in healthcare; part B highlights different analytical techniques used in healthcare; part C presents various security and privacy mechanisms used in healthcare; and finally, part D exemplifies different tools used in visualization and data analytics.
This book discusses the current research and concepts in data science and how these can be addressed using different nature-inspired optimization techniques. Focusing on various data science problems, including classification, clustering, forecasting, and deep learning, it explores how researchers are using nature-inspired optimization techniques to find solutions to these problems in domains such as disease analysis and health care, object recognition, vehicular ad-hoc networking, high-dimensional data analysis, gene expression analysis, microgrids, and deep learning. As such, it provides insights and inspiration for researchers wanting to employ nature-inspired optimization techniques in their own endeavors.
The five-volume set IFIP AICT 630, 631, 632, 633, and 634 constitutes the refereed proceedings of the International IFIP WG 5.7 Conference on Advances in Production Management Systems, APMS 2021, held in Nantes, France, in September 2021.* The 378 papers presented were carefully reviewed and selected from 529 submissions. They discuss artificial intelligence techniques, decision aid and new and renewed paradigms for sustainable and resilient production systems at four-wall factory and value chain levels. The papers are organized in the following topical sections:
Part I: artificial intelligence based optimization techniques for demand-driven manufacturing; hybrid approaches for production planning and scheduling; intelligent systems for manufacturing planning and control in the industry 4.0; learning and robust decision support systems for agile manufacturing environments; low-code and model-driven engineering for production system; meta-heuristics and optimization techniques for energy-oriented manufacturing systems; metaheuristics for production systems; modern analytics and new AI-based smart techniques for replenishment and production planning under uncertainty; system identification for manufacturing control applications; and the future of lean thinking and practice.
Part II: digital transformation of SME manufacturers: the crucial role of standard; digital transformations towards supply chain resiliency; engineering of smart-product-service-systems of the future; lean and Six Sigma in services healthcare; new trends and challenges in reconfigurable, flexible or agile production system; production management in food supply chains; and sustainability in production planning and lot-sizing.
Part III: autonomous robots in delivery logistics; digital transformation approaches in production management; finance-driven supply chain; gastronomic service system design; modern scheduling and applications in industry 4.0; recent advances in sustainable manufacturing; regular session: green production and circularity concepts; regular session: improvement models and methods for green and innovative systems; regular session: supply chain and routing management; regular session: robotics and human aspects; regular session: classification and data management methods; smart supply chain and production in society 5.0 era; and supply chain risk management under coronavirus.
Part IV: AI for resilience in global supply chain networks in the context of pandemic disruptions; blockchain in the operations and supply chain management; data-based services as key enablers for smart products, manufacturing and assembly; data-driven methods for supply chain optimization; digital twins based on systems engineering and semantic modeling; digital twins in companies: first developments and future challenges; human-centered artificial intelligence in smart manufacturing for the operator 4.0; operations management in engineer-to-order manufacturing; product and asset life cycle management for smart and sustainable manufacturing systems; robotics technologies for control, smart manufacturing and logistics; serious games analytics: improving games and learning support; smart and sustainable production and supply chains; smart methods and techniques for sustainable supply chain management; the new digital lean manufacturing paradigm; and the role of emerging technologies in disaster relief operations: lessons from COVID-19.
Part V: data-driven platforms and applications in production and logistics: digital twins and AI for sustainability; regular session: new approaches for routing problem solving; regular session: improvement of design and operation of manufacturing systems; regular session: crossdock and transportation issues; regular session: maintenance improvement and lifecycle management; regular session: additive manufacturing and mass customization; regular session: frameworks and conceptual modelling for systems and services efficiency; regular session: optimization of production and transportation systems; regular session: optimization of supply chain agility and reconfigurability; regular session: advanced modelling approaches; regular session: simulation and optimization of systems performances; regular session: AI-based approaches for quality and performance improvement of production systems; and regular session: risk and performance management of supply chains.
* The conference was held online.
Wikipedia, Flickr, YouTube, Facebook, and LinkedIn are all examples of large community-built databases, albeit with quite diverse purposes and collaboration patterns. Their usage and dissemination will grow further, introducing e.g. new semantics, personalization, or interactive media. Pardede delivers the first comprehensive research reference on community-built databases. The contributions discuss various technical and social aspects of research and development in areas such as Web science, social networks, and collaborative information systems.
This book explores the core themes of the Fourth Industrial Revolution (4IR), highlighting the digital transformation that has been occurring in society and business. Representing an interface between technologies in the physical, digital and biological disciplines, the book explores emerging technologies such as artificial intelligence, robotics, the Internet of Things, autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing. It reports the findings of collaborative research studies on the potential impact of the 4IR on labour markets, occupations, and the future workforce competencies and skills associated with eight industry sectors in Australia. The sectors are: agriculture and mining; manufacturing and logistics; health, medical and nursing; education; retail; financial services; government services; and tourism.
Foundations of Dependable Computing: Paradigms for Dependable Applications presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in its two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion book (published by Kluwer), subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.
Successfully competing in the new global economy requires immediate decision capability. This immediate decision capability requires quick analysis of both timely and relevant data. To support this analysis, organizations are piling up mountains of business data in their databases every day. Terabyte-sized (1,000 gigabytes) databases are commonplace in organizations today, and this enormous growth will make petabyte-sized databases (1,000 terabytes) a reality within the next few years (Whiting, 2002). Those organizations making swift, fact-based decisions by optimally leveraging their data resources will outperform those organizations that do not. A technology that facilitates this process of optimal decision-making is known as Organizational Data Mining (ODM). Organizational Data Mining: Leveraging Enterprise Data Resources for Optimal Performance demonstrates how organizations can leverage ODM for enhanced competitiveness and optimal performance.
Many business decisions are made in the absence of complete information about the decision consequences. Credit lines are approved without knowing the future behavior of the customers; stocks are bought and sold without knowing their future prices; parts are manufactured without knowing all the factors affecting their final quality; etc. All these cases can be categorized as decision making under uncertainty. Decision makers (human or automated) can handle uncertainty in different ways. Deferring the decision due to the lack of sufficient information may not be an option, especially in real-time systems. Sometimes expert rules, based on experience and intuition, are used. A decision tree is a popular form of representing a set of mutually exclusive rules. An example of a two-branch tree is: if a credit applicant is a student, approve; otherwise, decline. Expert rules are usually based on some hidden assumptions, which attempt to predict the decision consequences. A hidden assumption of the last rule set is: a student will be a profitable customer. Since direct predictions of the future may not be accurate, a decision maker can consider using some information from the past. The idea is to utilize the potential similarity between the patterns of the past (e.g., "most students used to be profitable") and the patterns of the future (e.g., "students will be profitable").
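For illustration, the two-branch expert rule above can be written as a trivial decision function; this is a minimal sketch, and the field name is a hypothetical assumption, not from the book:

    def approve_credit(applicant: dict) -> bool:
        # Expert rule: if the applicant is a student, approve; otherwise decline.
        # Hidden assumption encoded here: a student will be a profitable customer.
        return applicant.get("is_student", False)

    print(approve_credit({"is_student": True}))   # True  -> approve
    print(approve_credit({"is_student": False}))  # False -> decline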
Information retrieval is the science concerned with the effective and efficient retrieval of documents starting from their semantic content. It is employed to fulfill some information need from a large number of digital documents. Given the ever-growing amount of documents available and the heterogeneous data structures used for storage, information retrieval has recently faced and tackled novel applications. In this book, Melucci and Baeza-Yates present a wide-spectrum illustration of recent research results in advanced areas related to information retrieval. Readers will find chapters on e.g. aggregated search, digital advertising, digital libraries, discovery of spam and opinions, information retrieval in context, multimedia resource discovery, quantum mechanics applied to information retrieval, scalability challenges in web search engines, and interactive information retrieval evaluation. All chapters are written by well-known researchers, are completely self-contained and comprehensive, and are complemented by an integrated bibliography and subject index. With this selection, the editors provide the most up-to-date survey of topics usually not addressed in depth in traditional (text)books on information retrieval. The presentation is intended for a wide audience of people interested in information retrieval: undergraduate and graduate students, post-doctoral researchers, lecturers, and industrial researchers.
Foundations of Dependable Computing: Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. A companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. Another companion book (published by Kluwer), subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.
Compression and Coding Algorithms describes in detail the coding mechanisms that are available for use in data compression systems. The well-known Huffman coding technique is one such mechanism, but there have been many others developed over the past few decades, and this book describes, explains and assesses them. People undertaking research or software development in the areas of compression and coding algorithms will find this book an indispensable reference. In particular, the careful and detailed description of algorithms and their implementation, plus accompanying pseudo-code that can be readily implemented on computer, make this book a definitive reference in an area currently without one.
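As a taste of the subject matter, here is a minimal Python sketch of Huffman code construction from known symbol frequencies; this is an illustrative rendering under simple assumptions, not the book's own pseudo-code:

    import heapq

    def huffman_codes(freqs):
        """Map each symbol to a Huffman codeword, given {symbol: frequency}."""
        # Heap entries are (frequency, tiebreak, {symbol: partial codeword}).
        heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            # Merge the two least-frequent subtrees, prefixing 0 and 1.
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c1.items()}
            merged.update({s: "1" + code for s, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]

    print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))

Less frequent symbols receive longer codewords, which is the property that makes the resulting prefix code compress well.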
This book carefully defines the technologies involved in web service composition, provides a formal basis for all of the composition approaches, and shows the trade-offs among them. By considering web services as a deep formal topic, some surprising results emerge, such as the possibility of eliminating workflows. It examines the immense potential of web service composition for revolutionizing business IT, as evidenced by the marketing of Service Oriented Architectures (SOAs). The author begins with informal considerations and builds to the formalisms slowly, with easily understood motivating examples. Chapters examine the importance of semantics for web services and ways to apply semantic technologies. Topics range from model checking and Golog to WSDL and AI planning. The book is based upon lectures given to economics students and is suitable for business technologists with some computer science background. The reader can delve as deeply into the technologies as desired.
The Testability of Distributed Real-Time Systems starts by collecting and analyzing all principal problems, as well as their interrelations, that one has to keep in mind when testing a distributed real-time system. The book discusses them in some detail from the viewpoints of software engineering, distributed systems principles, and real-time system development. These problems are organization, observability, reproducibility, the host/target approach, environment simulation, and (test) representativity. Based on this framework, the book summarizes and evaluates the current work done in this area before going on to argue that the particular system architecture (hardware plus operating system) has a much greater influence on testing than is the case for ordinary, non-real-time software. The notions of event-triggered and time-triggered system architectures are introduced, and it is shown that time-triggered systems automatically (i.e., by the nature of their system architecture) solve or greatly ease the solving of some of the problems introduced earlier, i.e. observability, reproducibility, and (partly) representativity. A test methodology is derived for the time-triggered, distributed real-time system MARS. The book describes in detail how the author has taken advantage of its architecture, and shows how the remaining problems can be solved for this particular system architecture. Some experiments conducted to evaluate this test methodology are reported, including the experience gained from them, leading to a description of a number of prototype support tools. The Testability of Distributed Real-Time Systems can be used by both academic and industrial researchers interested in distributed and/or real-time systems, or in software engineering for such systems. This book can also be used as a text in advanced courses on distributed or real-time systems.
Advances In Digital Government presents a collection of in-depth articles that addresses a representative cross-section of the matrix of issues involved in implementing digital government systems. These articles constitute a survey of both the technical and policy dimensions related to the design, planning and deployment of digital government systems. The research and development projects within the technical dimension represent a wide range of governmental functions, including the provisioning of health and human services, management of energy information, multi-agency integration, and criminal justice applications. The technical issues dealt with in these projects include database and ontology integration, distributed architectures, scalability, and security and privacy. The human factors research emphasizes compliance with access standards for the disabled and the policy articles contain both conceptual models for developing digital government systems as well as real management experiences and results in deploying them. Advances In Digital Government presents digital government issues from the perspectives of different communities and societies. This geographic and social diversity illuminates a unique array of policy and social perspectives, exposing practitioners to new and useful ways of thinking about digital government.
This book shows C# developers how to use C# 2008 and ADO.NET 3.5 to develop database applications the way the best professionals do. After an introductory section, section 2 shows how to use data sources and datasets for Rapid Application Development and prototyping of Windows Forms applications. Section 3 shows how to build professional 3-layer applications that consist of presentation, business, and database classes. Section 4 shows how to use the new LINQ feature to work with data structures like datasets, SQL Server databases, and XML documents. And section 5 shows how to build database applications by using the new Entity Framework to map business objects to database objects. To ensure mastery, this book presents 23 complete database applications that demonstrate best programming practices. And it's all done in the distinctive Murach style that has been training professional developers for 35 years.