The first of a two-volume set on novel methods in harmonic analysis, this book draws on a number of original research and survey papers from well-known specialists detailing the latest innovations and recently discovered links between various fields. Along with many deep theoretical results, these volumes contain numerous applications to problems in signal processing, medical imaging, geodesy, statistics, and data science. The chapters cover an impressive range of ideas from both traditional and modern harmonic analysis, such as: the Fourier transform, Shannon sampling, frames, wavelets, functions on Euclidean spaces, analysis on function spaces of Riemannian and sub-Riemannian manifolds, Fourier analysis on manifolds and Lie groups, analysis on combinatorial graphs, sheaves, co-sheaves, and persistent homologies on topological spaces. Volume I is organized around the theme of frames and other bases in abstract and function spaces, covering topics such as:
- The advanced development of frames, including Sigma-Delta quantization for fusion frames, localization of frames, and frame conditioning, as well as applications to distributed sensor networks, Galerkin-like representation of operators, scaling on graphs, and dynamical sampling.
- A systematic approach to shearlets with applications to wavefront sets and function spaces.
- Prolate and generalized prolate functions, spherical Gauss-Laguerre basis functions, and radial basis functions.
- Kernel methods, wavelets, and frames on compact and non-compact manifolds.
Forecasting is a crucial function for companies in the fashion industry, but in many real-life forecasting applications in this field, the data patterns are notoriously volatile and it is very difficult, if not impossible, to learn the underlying patterns analytically. As a result, many traditional methods (such as pure statistical models) fail to make sound predictions. Over the past decade, advances in artificial intelligence and computing technologies have provided an alternative way of generating precise and accurate forecasting results for fashion businesses. Despite being an important and timely topic, there is currently no comprehensive reference source that provides up-to-date theoretical and applied research findings on intelligent fashion forecasting systems. This three-part handbook fills that need, covering material ranging from introductory studies and technical reviews, through theoretical modeling research, to intelligent fashion forecasting applications and analysis. The book is suitable for academic researchers, graduate students, senior undergraduate students, and practitioners who are interested in the latest research on fashion forecasting.
Learn how to create, train, and tweak large language models (LLMs) by building one from the ground up! In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You'll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks. Build a Large Language Model (from Scratch) teaches you how to plan and code each part of an LLM, prepare training data, pretrain the model, and fine-tune it for your own applications.
Build a Large Language Model (from Scratch) takes you inside the AI black box to tinker with the internal systems that power generative AI. As you work through each key stage of LLM creation, you’ll develop an in-depth understanding of how LLMs work, their limitations, and their customization methods. Your LLM can be developed on an ordinary laptop, and used as your own personal assistant.
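To give a flavour of what "building from scratch" involves, the sketch below shows a minimal causal self-attention layer of the kind GPT-style models are assembled from. It is an illustration under our own assumptions, not code from the book; the class name and dimensions are hypothetical.

```python
# Illustrative sketch only: a minimal single-head causal self-attention layer of the
# kind a from-scratch GPT-style model is built around. Names are hypothetical.
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    def __init__(self, embed_dim: int, context_len: int):
        super().__init__()
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)   # query/key/value projections
        self.proj = nn.Linear(embed_dim, embed_dim)      # output projection
        # mask that hides "future" tokens so each position attends only backwards
        mask = torch.tril(torch.ones(context_len, context_len)).bool()
        self.register_buffer("mask", mask)

    def forward(self, x):                                # x: (batch, seq, embed_dim)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5      # scaled dot-product attention
        scores = scores.masked_fill(~self.mask[:t, :t], float("-inf"))
        return self.proj(torch.softmax(scores, dim=-1) @ v)

# toy usage: a batch of 2 sequences, 8 tokens, 32-dimensional embeddings
attn = CausalSelfAttention(embed_dim=32, context_len=16)
print(attn(torch.randn(2, 8, 32)).shape)   # -> torch.Size([2, 8, 32])
```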
This book constitutes the refereed proceedings of the 16th IFIP WG 9.4 International Conference on Social Implications of Computers in Developing Countries, ICT4D 2020, which was supposed to be held in Salford, UK, in June 2020, but was held virtually instead due to the COVID-19 pandemic. The 18 revised full papers presented were carefully reviewed and selected from 29 submissions. The papers present a wide range of perspectives and disciplines including (but not limited to) public administration, entrepreneurship, business administration, information technology for development, information management systems, organization studies, philosophy, and management. They are organized in the following topical sections: digital platforms and gig economy; education and health; inclusion and participation; and business innovation and data privacy.
I3E 2010 marked the 10th anniversary of the IFIP Conference on e-Business, e-Services, and e-Society, continuing a tradition that began in 1998 with the International Conference on Trends in Electronic Commerce, TrEC 1998, in Hamburg (Germany). Three years later the inaugural I3E 2001 conference was held in Zurich (Switzerland). Since then I3E has made its journey through the world: 2002 Lisbon (Portugal), 2003 Sao Paulo (Brazil), 2004 Toulouse (France), 2005 Poznan (Poland), 2006 Turku (Finland), 2007 Wuhan (China), 2008 Tokyo (Japan), and 2009 Nancy (France). I3E 2010 took place in Buenos Aires (Argentina), November 3-5, 2010. Known as "The Pearl" of South America, Buenos Aires is a cosmopolitan, colorful, and vibrant city, surprising its visitors with a vast variety of cultural and artistic performances, European architecture, and a passion for tango, coffee houses, and football discussions. A cultural reference in Latin America, the city hosts 140 museums, 300 theaters, and 27 public libraries, including the National Library. It is also the main educational center in Argentina and home to renowned universities, including the University of Buenos Aires, founded in 1821. Besides location, the timing of I3E 2010 was also significant: it coincided with the 200th anniversary celebration of the first local government in Argentina.
This book introduces the basic methodologies for successful data analytics. Matrix optimization and approximation are explained in detail and extensively applied to dimensionality reduction by principal component analysis and multidimensional scaling. Diffusion maps and spectral clustering are derived as powerful tools. The methodological overlap between data science and machine learning is emphasized by demonstrating how data science is used for classification as well as supervised and unsupervised learning.
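As a concrete, purely illustrative example of the matrix-approximation view of dimensionality reduction described above, the snippet below computes a rank-2 PCA projection with the SVD. The synthetic data and variable names are our own, not the book's.

```python
# Hedged illustration, not from the book: PCA as a rank-k matrix approximation
# computed with the SVD, the core idea behind the dimensionality reduction above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 200 samples, 10 features (synthetic)
Xc = X - X.mean(axis=0)                   # centre each feature

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = Xc @ Vt[:k].T                    # project onto the top-k principal axes
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(scores.shape, round(float(explained), 3))   # (200, 2) and the variance share kept
```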
Uncertain data is inherent in many important applications, such as environmental surveillance, market analysis, and quantitative economics research. Due to the importance of those applications and the rapidly increasing amounts of uncertain data collected and accumulated, analyzing large collections of uncertain data has become an important task. Ranking queries (also known as top-k queries) are often natural and useful in analyzing uncertain data. "Ranking Queries on Uncertain Data" discusses the motivating applications, challenging problems, fundamental principles, and evaluation algorithms of ranking queries on uncertain data. Theoretical and algorithmic results of ranking queries on uncertain data are presented in the last section of the book. "Ranking Queries on Uncertain Data" is the first book to systematically discuss the problem of ranking queries on uncertain data.
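For readers new to the topic, here is a deliberately simple illustration of ranking uncertain tuples by expected score (score weighted by existence probability). This is only one naive semantics; the book analyses far richer ones, and the data and names below are made up.

```python
# Illustrative only: ranking uncertain tuples by expected score. Real top-k
# semantics on uncertain data are considerably more subtle.
from operator import itemgetter

tuples = [                                  # (id, score, existence probability)
    ("sensor-a", 95.0, 0.40),
    ("sensor-b", 80.0, 0.90),
    ("sensor-c", 70.0, 0.99),
]

def top_k(rows, k):
    ranked = [(tid, score * p) for tid, score, p in rows]   # expected score per tuple
    return sorted(ranked, key=itemgetter(1), reverse=True)[:k]

print(top_k(tuples, k=2))   # [('sensor-b', 72.0), ('sensor-c', 69.3)]
```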
This book describes the latest methods and tools for the management of information within facility management services and explains how it is possible to collect, organize, and use information over the life cycle of a building in order to optimize the integration of these services and improve the efficiency of processes. The coverage includes presentation and analysis of basic concepts, procedures, and international standards in the development and management of real estate inventories, building registries, and information systems for facility management. Models of strategic management are discussed, and the functions and roles of the strategic management center are explained. Detailed attention is also devoted to building information modeling (BIM) for facility management and potential interactions between information systems and BIM applications. Criteria for evaluating information system performance are identified, and guidelines of value in developing technical specifications for facility management services are proposed. The book will aid clients and facility managers in ensuring that information bases are effectively compiled and used in order to enhance building maintenance and facility management.
The main purpose of this book is to sum up the vital and highly topical research issue of knowledge representation on the Web and to discuss novel solutions that combine the benefits of folksonomies and Web 2.0 approaches with ontologies and semantic technologies. The book contains an overview of knowledge representation approaches past, present, and future, an introduction to ontologies and Web indexing, and novel approaches to developing ontologies. It combines aspects of knowledge representation for both the Semantic Web (ontologies) and the Web 2.0 (folksonomies); currently there is no other monograph that provides a combined overview of these topics. It also focuses on using knowledge representation methods for document indexing. For this purpose, considerations from classical librarian approaches to knowledge representation (thesauri, classification schemes, etc.) are included, which are not covered by most other books, which have a stronger background in computer science.
Understanding the latest capabilities in the cyber threat landscape as well as the cyber forensic challenges and approaches is the best way users and organizations can prepare for potential negative events. Adopting an experiential learning approach, this book describes how cyber forensics researchers, educators and practitioners can keep pace with technological advances, and acquire the essential knowledge and skills, ranging from IoT forensics, malware analysis, and CCTV and cloud forensics to network forensics and financial investigations. Given the growing importance of incident response and cyber forensics in our digitalized society, this book will be of interest and relevance to researchers, educators and practitioners in the field, as well as students wanting to learn about cyber forensics.
This book provides comprehensive coverage of the fundamentals of database management systems. It contains a detailed description of relational database management system concepts, along with a variety of solved examples and review questions with solutions. The book is for those who require a better understanding of relational data modeling: its purpose, its nature, and the standards used in creating relational data models.
This book provides the state-of-the-art development on security and privacy for fog/edge computing, together with the corresponding system architectural support and applications. It is organized into five parts with a total of 15 chapters, each part addressing an important facet of the field. The first part presents an overview of fog/edge computing, focusing on its relationship with cloud technology and its future with the use of 5G communication; several applications of edge computing are discussed. The second part considers security issues in fog/edge computing, including secure storage and search services, collaborative intrusion detection on IoT-fog computing, and the feasibility of deploying Byzantine agreement protocols in untrusted environments. The third part studies privacy issues in fog/edge computing: it first investigates the unique privacy challenges of fog/edge computing and then discusses a privacy-preserving framework for edge-based video analysis, a popular machine learning application on fog/edge. The fourth part covers the security architectural design of fog/edge computing, including a comprehensive overview of vulnerabilities at multiple architectural levels, security and intelligent management, and the implementation of network-function-virtualization-enabled multicasting; it also explains how blockchain can be used to realize security services. The last part surveys applications of fog/edge computing, including fog/edge computing in Industrial IoT, edge-based augmented reality, data streaming in fog/edge computing, and blockchain-based applications for edge-IoT. This book is designed for academics, researchers, and government officials working in the fields of fog/edge computing and cloud computing. Practitioners and business organizations (e.g., executives, system designers, and marketing professionals) who conduct teaching, research, decision making, and design of fog/edge technology will also benefit from it. The content will be particularly useful for advanced-level students studying computer science, computer technology, and information systems, but also applies to students in business, education, and economics who would benefit from the information, models, and case studies therein.
This book develops an IT strategy for cloud computing that helps businesses evaluate their readiness for cloud services and calculate the ROI. The framework provided helps reduce the risks involved in transitioning from a traditional "on site" IT strategy to virtual "cloud computing". Since the advent of cloud computing, many organizations have made substantial gains implementing this innovation. Cloud computing allows companies to focus more on their core competencies, as IT enablement is taken care of through cloud services. Cloud Computing and ROI includes case studies covering the retail, automobile, and food processing industries. Each case study describes a successful implementation of the cloud computing framework and explains the strategy behind it. As cloud computing may not be ideal for all businesses, criteria are also offered to help determine whether this strategy should be adopted.
In practice, the design and architecture of a cloud varies among cloud providers. We present a generic framework for evaluating the performance, availability, and reliability characteristics of various cloud platforms, and we describe a generic benchmark architecture for cloud databases, specifically NoSQL database-as-a-service offerings, which measures replication delay and monetary cost. Service level agreements (SLAs) represent the contract that captures the agreed-upon guarantees between a service provider and its customers. The specifications of existing SLAs for cloud services are not designed to flexibly handle even relatively straightforward performance and technical requirements of consumer applications. We present a novel approach for SLA-based management of cloud-hosted databases from the consumer perspective, together with an end-to-end framework for consumer-centric SLA management of cloud-hosted databases. The framework facilitates adaptive and dynamic provisioning of the database tier of software applications based on application-defined policies, satisfying their SLA performance requirements, avoiding the cost of SLA violations, and controlling the monetary cost of the allocated computing resources. In this framework, the SLAs of consumer applications are declaratively defined in terms of goals that are subject to a number of constraints specific to the application requirements. The framework continuously monitors the application-defined SLA and automatically triggers the execution of the necessary corrective actions (scaling the database tier out or in) when required. The framework is database-platform-agnostic, uses virtualization-based database replication mechanisms, and requires zero source code changes to the cloud-hosted software applications.
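As a rough illustration of the kind of consumer-defined policy and corrective action described above (not the framework's actual code, which is not reproduced here), the following toy decision function scales the database tier out or in against a hypothetical latency goal; all names and thresholds are invented.

```python
# Toy sketch only: monitor an application-defined SLA goal and decide whether to
# scale the database tier out or in. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SlaPolicy:
    max_p95_latency_ms: float   # consumer-defined performance goal
    min_replicas: int = 1
    max_replicas: int = 8

def decide_action(current_replicas: int, observed_p95_ms: float, policy: SlaPolicy) -> str:
    if observed_p95_ms > policy.max_p95_latency_ms and current_replicas < policy.max_replicas:
        return "scale_out"       # SLA violated: add a replica
    if observed_p95_ms < 0.5 * policy.max_p95_latency_ms and current_replicas > policy.min_replicas:
        return "scale_in"        # comfortably within SLA: release a replica to cut cost
    return "hold"

policy = SlaPolicy(max_p95_latency_ms=200.0)
print(decide_action(2, 350.0, policy))   # scale_out
print(decide_action(4, 60.0, policy))    # scale_in
```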
This book examines recent developments in semantic systems that can respond to situations, environments, and events. The contributors cover how to design, implement, and utilize disruptive technologies. The editor discusses two fundamental sets of disruptive technologies: semantic technologies, including description logics, ontologies, and agent frameworks; and semantic information rendering, including graphical displays of high-density, time-sensitive data to improve situational awareness. Beyond practical illustrations of emerging technologies, the editor proposes an incremental development method called knowledge scaffolding, a proven educational psychology technique for learning a subject matter thoroughly. The goal of this book is to help readers learn about managing information resources from the ground up, reinforcing the learning as they read on.
New Internet developments pose greater and greater privacy dilemmas. In the Information Society, the need for individuals to protect their autonomy and retain control over their personal information is becoming more and more important. Today, information and communication technologies - and the people responsible for making decisions about them, designing them, and implementing them - scarcely consider those requirements, thereby potentially putting individuals' privacy at risk. The increasingly collaborative character of the Internet enables anyone to compose services and contribute and distribute information. It may become hard for individuals to manage and control information that concerns them, and particularly to eliminate outdated or unwanted personal information, thus leaving personal histories exposed permanently. These activities raise substantial new challenges for personal privacy at the technical, social, ethical, regulatory, and legal levels: How can privacy in emerging Internet applications such as collaborative scenarios and virtual communities be protected? What frameworks and technical tools could be utilized to maintain life-long privacy? During September 3-10, 2009, IFIP (International Federation for Information Processing) working groups 9.2 (Social Accountability), 9.6/11.7 (IT Misuse and the Law), 11.4 (Network Security), and 11.6 (Identity Management) held their 5th International Summer School in cooperation with the EU FP7 integrated project PrimeLife in Sophia Antipolis and Nice, France. The focus of the event was on privacy and identity management for emerging Internet applications throughout a person's lifetime. The aim of the IFIP Summer Schools has been to encourage young academic and industry entrants to share their own ideas about privacy and identity management and to build up collegial relationships with others. As such, the Summer Schools have been introducing participants to the social implications of information technology through the process of informed discussion.
This textbook introduces new business concepts for cloud environments, such as secure, scalable anonymity and practical payment protocols for the Internet of Things and blockchain technology. The protocol uses electronic cash for payment transactions; in this new protocol, consumers can improve their anonymity with respect to banks if they are worried about disclosure of their identities in the cloud. Currently there is no book that covers such protocols combining anonymization and blockchain technology, which makes this a useful acquisition for universities. The textbook provides new directions for access control management and online business, with new challenges within blockchain technology that may arise in cloud environments. One challenge is related to the authorization granting process: for example, when a role is granted to a user, the role may conflict with the user's other roles, or, in combination with them, give the user an excessive level of authority. Another is related to authorization revocation: for instance, when a role is revoked from a user, the user may still effectively hold it. Experts will benefit from the methodology developed for the authorization granting algorithm and for the weak and strong revocation algorithms.
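To make the granting and revocation challenges concrete, here is a toy sketch (not the book's algorithms) of a conflict-of-interest check at grant time and of weak versus strong revocation over a small role hierarchy; the role names and conflict set are invented.

```python
# Toy illustration only: granting with a simple conflict-of-interest check, plus
# weak vs. strong revocation over a role hierarchy. Not the book's algorithms.
CONFLICTS = {("auditor", "accountant")}                 # mutually exclusive roles
SENIOR_TO = {"manager": {"clerk"}, "clerk": set()}      # manager implies clerk

def grant(user_roles: set, role: str) -> set:
    for a, b in CONFLICTS:
        if {a, b} <= user_roles | {role}:
            raise ValueError(f"granting {role!r} would violate separation of duties")
    return user_roles | {role} | SENIOR_TO.get(role, set())

def revoke(user_roles: set, role: str, strong: bool = False) -> set:
    removed = {role} | (SENIOR_TO.get(role, set()) if strong else set())
    return user_roles - removed

roles = grant(set(), "manager")           # {'manager', 'clerk'}
print(revoke(roles, "manager"))           # weak: derived 'clerk' role remains
print(revoke(roles, "manager", True))     # strong: derived roles removed too
```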
The three-volume set IFIP AICT 368-370 constitutes the refereed post-conference proceedings of the 5th IFIP TC 5, SIG 5.1 International Conference on Computer and Computing Technologies in Agriculture, CCTA 2011, held in Beijing, China, in October 2011. The 189 revised papers presented were carefully selected from numerous submissions. They cover a wide range of interesting theories and applications of information technology in agriculture, including simulation models and decision-support systems for agricultural production, agricultural product quality testing, traceability and e-commerce technology, the application of information and communication technology in agriculture, and universal information service technology and service systems development in rural areas. The 59 papers included in the third volume focus on simulation, optimization, monitoring, and control technology.
Based on more than 10 years of teaching experience, Blanken and his coeditors have assembled all the topics that should be covered in advanced undergraduate or graduate courses on multimedia retrieval and multimedia databases. The individual chapters of this textbook explain the general architecture of multimedia information retrieval systems and cover various metadata languages such as Dublin Core, RDF, and MPEG. The authors emphasize high-level features and show how these are used in mathematical models to support the retrieval process. Each chapter includes suggestions for further reading, and additional exercises and teaching material are available online.
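As a small, purely illustrative example of the kind of metadata language mentioned above, the snippet below attaches two Dublin Core properties to a media item as RDF using the rdflib library; the resource URI and literal values are made up.

```python
# Illustrative only: describing a media item with Dublin Core terms in RDF.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DC   # Dublin Core element set

g = Graph()
video = URIRef("http://example.org/media/clip42")        # hypothetical resource
g.add((video, DC.title, Literal("Sunset over the harbour")))
g.add((video, DC.creator, Literal("A. Photographer")))
print(g.serialize(format="turtle"))                       # emit the metadata as Turtle
```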
The creation and consumption of content, especially visual content, is ingrained into our modern world. This book contains a collection of texts centered on the evaluation of image retrieval systems. To enable reproducible evaluation we must create standardized benchmarks and evaluation methodologies. The individual chapters in this book highlight major issues and challenges in evaluating image retrieval systems and describe various initiatives that provide researchers with the necessary evaluation resources. In particular they describe activities within ImageCLEF, an initiative to evaluate cross-language image retrieval systems which has been running as part of the Cross Language Evaluation Forum (CLEF) since 2003. To this end, the editors collected contributions from a range of people: those involved directly with ImageCLEF, such as the organizers of specific image retrieval or annotation tasks; participants who have developed techniques to tackle the challenges set forth by the organizers; and people from industry and academia involved with image retrieval and evaluation generally. Mostly written for researchers in academia and industry, the book stresses the importance of combining textual and visual information - a multimodal approach - for effective retrieval. It provides the reader with clear ideas about information retrieval and its evaluation in contexts and domains such as healthcare, robot vision, press photography, and the Web.
Trustworthy Ubiquitous Computing covers aspects of trust in ubiquitous computing environments: context, privacy, reliability, usability, and user experience in the emerging computing paradigm of ubiquitous computing, which includes pervasive, grid, and peer-to-peer computing as well as sensor networks, and which aims to provide secure computing and communication services anytime and anywhere. Marc Weiser presented his vision of disappearing and ubiquitous computing more than 15 years ago. The picture of the computer embedded into our environment was a major innovation and the starting point for various areas of research. To fully explore the idea of ubiquitous computing, several houses were built, equipped with technology, and used as laboratories in order to find and test appliances that are useful and could be made available in our everyday life. In recent years industry has picked up the idea of integrating ubiquitous computing, and products such as remote controls for the home have been developed and brought to market. In spite of the many applications and projects in the area of ubiquitous and pervasive computing, widespread success is still far away, and one of the main reasons is the lack of acceptance of, and confidence in, this technology. Although researchers and industry are working in all of these areas, a forum is needed to elaborate the security, reliability, and privacy issues that result in trustworthy interfaces and computing environments for people interacting within these ubiquitous environments. The user experience factor of trust thus becomes a crucial issue for the success of a UbiComp application. The goal of this book is to provide a state of the art of trustworthy ubiquitous computing, addressing recent research results and presenting and discussing ideas, theories, technologies, systems, tools, applications, and experiences on all related theoretical and practical issues.
Information infrastructures are integrated solutions based on the fusion of information and communication technologies. They are characterized by the large amount of data that must be managed accordingly. An information infrastructure requires an efficient and effective information retrieval system to provide access to the items stored in the infrastructure. Terminological Ontologies: Design, Management and Practical Applications presents the main problems that affect the discovery systems of information infrastructures to manage terminological models, and introduces a combination of research tools and applications in Semantic Web technologies. This book specifically analyzes the need to create, relate, and integrate the models required for an infrastructure by elaborating on the problem of accessing these models in an efficient manner via interoperable services and components. Terminological Ontologies: Design, Management and Practical Applications is geared toward information management and Semantic Web professionals working as project managers, application developers, government workers, and more. Advanced undergraduate and graduate level students, professors, and researchers focusing on computer science will also find this book valuable as a secondary text or reference book.
The World Wide Web can be considered a huge library that in consequence needs a capable librarian responsible for the classification and retrieval of documents as well as the mediation between library resources and users. Based on this idea, the concept of the "Librarian of the Web" is introduced which comprises novel, librarian-inspired methods and technical solutions to decentrally search for text documents in the web using peer-to-peer technology. The concept's implementation in the form of an interactive peer-to-peer client, called "WebEngine", is elaborated on in detail. This software extends and interconnects common web servers creating a fully integrated, decentralised and self-organising web search system on top of the existing web structure. Thus, the web is turned into its own powerful search engine without the need for any central authority. This book is intended for researchers and practitioners having a solid background in the fields of Information Retrieval and Web Mining.
With the proliferation of social media and online communities in the networked world, a large gamut of data has been collected and stored in databases. Such data is accumulating at a phenomenal rate, pushing the classical methods of data analysis to their limits. This book presents an integrated framework of recent empirical and theoretical research on social network analysis based on a wide range of techniques from disciplines such as data mining, the social sciences, mathematics, statistics, physics, network science, and machine learning, together with visualization techniques and security. The book illustrates the potential of multi-disciplinary techniques in various real-life problems and aims to motivate researchers in social network analysis to design more effective tools by integrating swarm intelligence and data mining.
This volume presents a collection of carefully selected contributions in the area of social media analysis. Each chapter opens up a number of research directions that have the potential to be taken on further in this rapidly growing area of research. The chapters are diverse enough to serve a number of directions of research with Sentiment Analysis as the dominant topic in the book. The authors have provided a broad range of research achievements from multimodal sentiment identification to emotion detection in a Chinese microblogging website. The book will be useful to research students, academics and practitioners in the area of social media analysis.
You may like...
- Evolution Equations and Lagrangian… - Anvarbek M. Meirmanov, Vladislav V. Pukhnachov, … (Hardcover) - R6,052 (Discovery Miles 60 520)
- Numerical Methods for Delay Differential… - Alfredo Bellen, Marino Zennaro (Hardcover) - R5,542 (Discovery Miles 55 420)
- Richardson Extrapolation - Practical… - Zahari Zlatev, Ivan Dimov, … (Hardcover) - R4,609 (Discovery Miles 46 090)
- Optimal Control of Complex Structures - Karl-Heinz Hoffmann, Irena Lasiecka, … (Hardcover) - R2,605 (Discovery Miles 26 050)
- Electromagnetic Wave Diffraction by… - Smirnov, Ilyinsky (Hardcover)
- Boundary Elements and other Mesh… - A. H.-D. Cheng, S. Syngellakis (Hardcover) - R3,368 (Discovery Miles 33 680)