This book offers practical as well as conceptual knowledge of the latest trends, tools, techniques and methodologies of data analytics in smart cities. A smart city is an advanced technological environment capable of understanding its surroundings by examining data to improve livability. Smart cities deploy many kinds of wireless sensors to gather city data at high volume, high velocity and broad variety, with data analytics facilitated through IoT platforms. IoT architectures and infrastructures therefore need to be customized to the requirements of specific smart-city domains such as transportation, traffic, health and environment. Smart cities will provide next-generation development technologies for urbanization, addressing the need for environmental sustainability, personalization, mobility, optimal energy utilization, better administrative services and a higher quality of life. Each chapter presents the reader with an in-depth investigation of data analytics from a smart-city perspective. The book presents cutting-edge and future perspectives on smart cities, in which industry experts, scientists and scholars exchange ideas and experience on frontier technologies, breakthroughs, and innovative solutions and applications.
This monograph presents a collection of major developments leading toward the implementation of white space technology - an emerging wireless standard for using wireless spectrum in locations where it is unused by licensed users. Some of the key research areas in the field are covered. These include emerging standards, technical insights from early pilots and simulations, software defined radio platforms, geo-location spectrum databases and current white space spectrum usage in India and South Africa.
The increasing penetration of IT in organizations calls for an integrative perspective on enterprises and their supporting information systems. MERODE offers an intuitive and practical approach to enterprise modelling and to using these models as the core for building enterprise information systems. From a business analyst's perspective, the benefits of the approach are its simplicity and the possibility to evaluate the consequences of modelling choices through fast prototyping, without requiring any technical experience. The focus on domain modelling ensures the development of a common language for talking about essential business concepts and of a shared understanding of business rules. On the construction side, experienced benefits of the approach are a clear separation between specification and implementation, more generic and future-proof systems, and improved insight into the cost of changes. A first distinguishing feature is the method's grounding in process algebra, which provides clear criteria and practical support for model quality. Second, the use of the concept of business events provides a deep integration between structural and behavioral aspects. The clear and intuitive semantics easily extend to application integration (COTS software and Web Services). Students and practitioners are the book's main target audience, as both groups will benefit from its practical advice on how to create complete models which combine structural and behavioral views of a system-to-be and which can readily be transformed into code, and on how to evaluate the quality of those models. In addition, researchers in the area of conceptual or enterprise modelling will find a concise overview of the main findings related to the MERODE project. The work is complemented by a wealth of extra material on the author's web page at KU Leuven, including a free CASE tool with code generator, a collection of cases with solutions, and a set of domain modelling patterns that have been developed on the basis of the method's use in industry and government.
This book constitutes the refereed proceedings of the 36th IFIP TC 11 International Conference on Information Security and Privacy Protection, SEC 2021, held in Oslo, Norway, in June 2021; the conference was held virtually. The 28 full papers presented were carefully reviewed and selected from 112 submissions. The papers present novel research on theoretical and practical aspects of security and privacy protection in ICT systems. They are organized in topical sections on digital signatures; vulnerability management; covert channels and cryptography; application and system security; privacy; network security; machine learning for security; and security management.
The Green and Virtual Data Center sets aside the political aspects of what is or is not considered green to focus instead on the opportunities for organizations that want to sustain environmentally friendly, economical growth. If you are willing to believe that IT infrastructure resources deployed in a highly virtualized manner can be combined with other technologies to achieve simplified and cost-effective delivery of services in a green, profitable manner, this book is for you. Savvy industry veteran Greg Schulz provides real-world insight, addressing best practices as well as server, software, storage, networking, and facilities issues for any current or next-generation virtual data center that relies on underlying physical infrastructure. Coverage includes: energy and data footprint reduction; cloud-based storage and computing; intelligent and adaptive power management; server, storage, and networking virtualization; tiered servers, storage, networks, and data centers; and energy avoidance and energy efficiency. Many current and emerging technologies can enable a green and efficient virtual data center to support and sustain business growth with a reasonable return on investment. This book presents virtually all critical IT technologies and techniques and discusses the interdependencies that must be supported to enable a dynamic, energy-efficient, economical, and environmentally friendly green IT data center. This is a path that every organization must ultimately follow. Take a tour of the Green and Virtual Data Center website. CRC Press is pleased to announce that The Green and Virtual Data Center has been added to Intel Corporation's Recommended Reading List, a program that provides technical professionals with a simple and handy reference list.
Focusing on a data-centric perspective, this book provides a complete overview of data mining: its uses, methods, current technologies, commercial products, and future challenges. Data Mining is divided into three parts. Part I describes technologies for data mining: database systems, warehousing, machine learning, visualization, decision support, statistics, parallel processing, and architectural support for data mining. Part II presents tools and techniques: getting the data ready, carrying out the mining, pruning the results, evaluating outcomes, defining specific approaches, examining a specific technique based on logic programming, and citing literature and vendors for up-to-date information. Part III examines emerging trends: mining distributed and heterogeneous data sources; multimedia data such as text, images, and video; mining data on the World Wide Web; metadata aspects of mining; and privacy issues. This self-contained book also contains two appendices providing exceptional information on technologies such as data management and artificial intelligence. Is there a need for mining? Do you have the right tools? Do you have the people to do the work? Do you have sufficient funds allocated to the project? All these questions must be answered before embarking on a project. Data Mining provides singular guidance on appropriate applications for specific techniques and thoroughly assesses valuable product information.
As Web-based systems and e-commerce carry businesses into the 21st century, databases are becoming the workhorses that shoulder each and every online transaction. For organizations to run effective 24/7 Web operations, they need powerhouse databases that deliver peak performance all the time. High Performance Web Databases: Design, Development, and Deployment arms you with every essential technique, from design and modeling to advanced topics such as data conversion, performance tuning, Web access, interfacing legacy systems, and security.
The aim of this book is to provide an internationally respected collection of scientific research methods, technologies and applications in the area of data science. The book will prove useful to researchers, professors, research students and practitioners, as it reports novel research work on challenging topics in the area surrounding data science. Some of the chapters are written in tutorial style, covering machine learning algorithms, data analysis, information design, infographics, relevant applications, etc. The book is structured as follows: Part I, Data Science: Theory, Concepts, and Algorithms, comprises five chapters on data science theory, concepts, techniques and algorithms; Part II, Data Design and Analysis, comprises five chapters on data design and analysis; Part III, Applications and New Trends in Data Science, comprises four chapters on applications and new trends in data science.
This book starts from the relationship between the urban built environment and travel behavior and focuses on analyzing, through multi-source traffic big data, the origins of the traffic phenomena behind the data, which distinguishes it from previous data-driven traffic big data analysis literature. The book focuses on understanding, estimating, predicting, and optimizing mobility patterns. Readers will find multi-source traffic big data processing methods, related statistical analysis models, and practical case applications. The book bridges the gap between traffic big data, statistical analysis models, and mobility pattern analysis with a systematic investigation of traffic big data's impact on mobility patterns and urban planning.
Pattern Recognition Algorithms for Data Mining addresses different pattern recognition (PR) tasks in a unified framework with both theoretical and experimental results. Tasks covered include data condensation, feature selection, case generation, clustering/classification, and rule generation and evaluation. This volume presents various theories, methodologies, and algorithms, using both classical approaches and hybrid paradigms. The authors emphasize large datasets with overlapping, intractable, or nonlinear boundary classes, and datasets that demonstrate granular computing in soft frameworks. Organized into eight chapters, the book begins with an introduction to PR, data mining, and knowledge discovery concepts. The authors analyze the tasks of multi-scale data condensation and dimensionality reduction, then explore the problem of learning with support vector machines (SVMs). They conclude by highlighting the significance of granular computing for different mining tasks in a soft paradigm.
Digital forensics deals with the acquisition, preservation, examination, analysis and presentation of electronic evidence. Computer networks, cloud computing, smartphones, embedded devices and the Internet of Things have expanded the role of digital forensics beyond traditional computer crime investigations. Practically every crime now involves some aspect of digital evidence; digital forensics provides the techniques and tools to articulate this evidence in legal proceedings. Digital forensics also has myriad intelligence applications; furthermore, it has a vital role in cyber security -- investigations of security breaches yield valuable information that can be used to design more secure and resilient systems. Advances in Digital Forensics XVI describes original research results and innovative applications in the discipline of digital forensics. In addition, it highlights some of the major technical and legal issues related to digital evidence and electronic crime investigations. The areas of coverage include: themes and issues, forensic techniques, filesystem forensics, cloud forensics, social media forensics, multimedia forensics, and novel applications. This book is the sixteenth volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.9 on Digital Forensics, an international community of scientists, engineers and practitioners dedicated to advancing the state of the art of research and practice in digital forensics. The book contains a selection of sixteen edited papers from the Sixteenth Annual IFIP WG 11.9 International Conference on Digital Forensics, held in New Delhi, India, in the winter of 2020. Advances in Digital Forensics XVI is an important resource for researchers, faculty members and graduate students, as well as for practitioners and individuals engaged in research and development efforts for the law enforcement and intelligence communities.
This book highlights state-of-the-art research on big data and the Internet of Things (IoT), along with related areas to ensure efficient and Internet-compatible IoT systems. It not only discusses big data security and privacy challenges, but also energy-efficient approaches to improving virtual machine placement in cloud computing environments. Big data and the Internet of Things (IoT) are ultimately two sides of the same coin, yet extracting, analyzing and managing IoT data poses a serious challenge. Accordingly, proper analytics infrastructures/platforms should be used to analyze IoT data. Information technology (IT) allows people to upload, retrieve, store and collect information, which ultimately forms big data. The use of big data analytics has grown tremendously in just the past few years. At the same time, the IoT has entered the public consciousness, sparking people's imaginations as to what a fully connected world can offer. Further, the book discusses the analysis of real-time big data to derive actionable intelligence in enterprise applications in several domains, such as in industry and agriculture. It explores possible automated solutions in daily life, including structures for smart cities and automated home systems based on IoT technology, as well as health care systems that manage large amounts of data (big data) to improve clinical decisions. The book addresses the security and privacy of the IoT and big data technologies, while also revealing the impact of IoT technologies on several scenarios in smart cities design. Intended as a comprehensive introduction, it offers in-depth analysis and provides scientists, engineers and professionals the latest techniques, frameworks and strategies used in IoT and big data technologies.
Preface. In nature, real-time systems have been evolving for some hundred million years. Animal nervous systems have the task of issuing control commands to the active organs in response to messages from the environment; conditioned reflexes, for example, play an important role here. Perhaps the emergence of the human being can be placed roughly at the time when his gradually developing brain produced thoughts whose significance reached, in a forward-planning way, beyond the immediate situation. Among other things, this eventually led to today's scientist, who builds his theories and systems on the basis of lengthy deliberation. The development of computers essentially took the opposite path. At first they served only to execute "rigid" programs, such as the first program-controlled computing machine, the Z3, which the undersigned was able to demonstrate in 1941. This was followed, among other things, by a special-purpose device for wing measurement, which can be regarded as the first process computer: about forty dial gauges working as analog-to-digital converters were read by the computing automaton and processed as variables within a program. But even this still happened in a rigid sequence. True process control, today also called real-time systems, requires reacting to constantly changing situations.
This book presents 3D3C platforms - three-dimensional systems for community, creation and commerce. It discusses tools including bots in social networks, team creativity, privacy, and virtual currencies & micropayments, as well as their applications in areas like healthcare, energy, collaboration, and art. More than 20 authors from 10 countries share their experiences, research findings and perspectives, offering a comprehensive resource on the emerging field of 3D3C worlds. The book is designed for both the novice and the expert as a way to unleash the emerging opportunities in 3D3C worlds. "This Handbook maps with breadth and insight the exciting frontier of building virtual worlds with digital technologies." David Perkins, Research Professor, Harvard Graduate School of Education. "This book is from one of the most adventurous and energetic persons I have ever met. Yesha takes us into new undiscovered spaces and provides insight into phenomena of social interaction and immersive experiences that transform our lives." Cees de Bont, Dean of School of Design & Chair Professor of Design, School of Design of the Hong Kong Polytechnic University. "When you read 3D3C Platforms you realize what a domain like ours -- 3D printing -- can and should do for the world. Clearly we are just starting. Inspiring." David Reis, CEO, Stratasys Ltd. "This book provides a stunning overview regarding how virtual worlds are reshaping possibilities for identity and community. The range of topics addressed by the authors, from privacy and taxation to fashion and health care, provide a powerful roadmap for addressing the emerging potential of these online environments." Tom Boellstorff, Professor, Department of Anthropology, University of California, Irvine. "Handbook on 3D3C Platforms amassed a unique collection of multidisciplinary academic thinking. A primer on innovations that will touch every aspect of the human community in the 21st century." Eli Talmor, Professor, London Business School.
This volume explores, from a legal perspective, how blockchain works. Perhaps more than ever before, this new technology requires us to take a multidisciplinary approach. The contributing authors, who include distinguished academics, public officials from important national authorities, and market operators, discuss and demonstrate how this technology can be a driver of innovation and yield positive effects in our societies, legal systems and economic/financial system. In particular, they present critical analyses of the potential benefits and legal risks of distributed ledger technology, while also assessing the opportunities offered by blockchain, and possible modes of regulating it. Accordingly, the discussions chiefly focus on the law and governance of blockchain, and thus on the paradigm shift that this technology can bring about.
The textbook covers the main aspects of Edge Computing, from a thorough look at the technology to the standards and industry associations working in the field. The book is conceived as a textbook for graduate students but also functions as a working guide for developers, engineers, and researchers. It aims not only to provide a comprehensive technology and standards reference for students, but also to offer useful research insights and practical exercises for edge software developers and investigators in the area (and for students looking to apply their skills). Particular emphasis is given to Multi-access Edge Computing (MEC) as defined by the European Telecommunications Standards Institute (ETSI), in relation to other standards organizations such as 3GPP, in alignment with recent industry efforts to produce harmonized standards for edge computing that leverage both ETSI ISG MEC and 3GPP specifications. Practical examples of Edge Computing implementations from industry groups, associations, companies and edge developers complete the book and make it useful for students entering the field. The book includes exercises, examples, and quizzes throughout.
This book, the first volume, highlights 8 of the roughly 36 megacities in the world, which by definition have 10 million or more inhabitants. The cities/chapters presented in this book draw on recent advances such as the wide use of ICT, IoT, e-Governance, e-Democracy, the smart economy, and the flattening and acceleration of the world taking place in recent times, as reported by three-time Pulitzer Prize winner Thomas Friedman. The book therefore departs from ideologies in which only certain megacities qualify for the title of smart global megacity, when in reality every megacity can, and it presents how smart global megacities can be created.
This book presents machine learning models and algorithms to address big data classification problems. Existing machine learning techniques such as the decision tree (a hierarchical approach), random forest (an ensemble hierarchical approach), and deep learning (a layered approach) are highly suitable for systems that can handle such problems. This book helps readers, especially students and newcomers to the field of big data and machine learning, to gain a quick understanding of the techniques and technologies; therefore, the theory, examples, and programs (MATLAB and R) presented in this book have been simplified, hardcoded, repeated, or spaced for improvements. They provide vehicles to test and understand the complicated concepts of various topics in the field. Readers are expected to adopt these programs to experiment with the examples, and then modify or write their own programs toward advancing their knowledge for solving more complex and challenging problems. The presentation format of this book focuses on simplicity, readability, and dependability so that both undergraduate and graduate students as well as new researchers, developers, and practitioners in this field can easily trust and grasp the concepts, and learn them effectively. It has been written to reduce mathematical complexity and help the vast majority of readers understand the topics and become interested in the field. This book consists of four parts, with a total of 14 chapters. The first part mainly focuses on the topics needed to help analyze and understand data and big data. The second part covers the topics that explain the systems required for processing big data. The third part presents the topics required to understand and select machine learning techniques to classify big data. Finally, the fourth part concentrates on the topics that explain scaling up machine learning, an important solution for modern big data problems.
This book negotiates the hyper-dimensions of the Internet through stories from myriad Web sites. Its fluent presentation and simple, chronological organization of topics highlight numerous opportunities and provide a solid starting point not only for inexperienced entrepreneurs and managers but for anyone interested in applying information technology in business through real or virtual enterprise networks. "A Manager's Primer on e-Networking" is an easy-to-follow primer on modern enterprise networking that every manager needs to read.
This edited book first consolidates the results of the EU-funded EDISON project (Education for Data Intensive Science to Open New science frontiers), which developed training material and information to assist educators, trainers, employers, and research infrastructure managers in identifying, recruiting and inspiring the data science professionals of the future. It then deepens the presentation of the information and knowledge gained to allow for easier assimilation by the reader. The contributed chapters are presented in sequence, each chapter picking up from the end point of the previous one. After the initial book and project overview, the chapters present the relevant data science competencies and body of knowledge, the model curriculum required to teach the required foundations, profiles of professionals in this domain, and use cases and applications. The text is supported with appendices on related process models. The book can be used to develop new courses in data science, evaluate existing modules and courses, draft job descriptions, and plan and design efficient data-intensive research teams across scientific disciplines.
Information is a key factor in business today, and data warehousing has become a major activity in the development and management of information systems that support the proper flow of information. Unfortunately, the majority of information systems are based on structured information stored in organizational databases, which means that companies isolate themselves from the business environment by concentrating on internal data sources only. It is therefore vital that organizations take advantage of external business information, which can be retrieved from Internet services and mechanically organized within existing information structures. Such a continuously extending, integrated collection of documents and data can facilitate decision-making processes in the organization. Filtering the Web to Feed Data Warehouses discusses areas such as: how to use a data warehouse for filtering Web content; how to retrieve relevant information from diverse sources on the Web; how to handle the time aspect; how to mechanically establish links among data warehouse structures and documents filtered from external sources; and how to use the collected information to increase corporate knowledge. It also gives a comprehensive example illustrating the idea of supplying data warehouses with relevant information filtered from the Web.
This book constitutes the refereed proceedings of the 11th IFIP WG 5.5/SOCOLNET Advanced Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2020, held in Costa de Caparica, Portugal, in July 2020. The 20 full papers and 24 short papers presented were carefully reviewed and selected from 91 submissions. The papers present selected results produced in engineering doctoral programs and focus on technological innovation for industry and service systems. Research results and ongoing work are presented, illustrated and discussed in the following areas: collaborative networks; decision systems; analysis and synthesis algorithms; communication systems; optimization systems; digital twins and smart manufacturing; power systems; energy control; power transportation; biomedical analysis and diagnosis; and instrumentation in health.
Big data technologies are used to achieve any type of analytics in a fast and predictable way, thus enabling better human- and machine-level decision making. Principles of distributed computing are the keys to big data technologies and analytics. The mechanisms related to data storage, data access, data transfer, visualization and predictive modeling using distributed processing across multiple low-cost machines are the key considerations that make big data analytics possible within stipulated cost and time, and practical for consumption by humans and machines. However, the current literature on big data analytics needs a holistic perspective that highlights the relation between big data analytics and distributed processing, for ease of understanding and practitioner use. This book fills that gap by addressing key aspects of distributed processing in big data analytics. The chapters tackle the essential concepts and patterns of distributed computing widely used in big data analytics. The book also covers the main technologies that support distributed processing. Finally, it provides insight into applications of big data analytics, highlighting how principles of distributed computing are used in those situations. Practitioners and researchers alike will find this book a valuable tool for their work, helping them to select the appropriate technologies while understanding the inherent strengths and drawbacks of those technologies.
Expert Bytes: Computer Expertise in Forensic Documents - Players, Needs, Resources and Pitfalls - introduces computer scientists and forensic document examiners to the computer expertise of forensic documents and assists them with the design of research projects in this interdisciplinary field. This is not a textbook on how to perform the actual forensic document expertise or program expertise software, but a project design guide, an anthropological inquiry, and a technology, market, and policies review. After reading this book you will have deepened your knowledge of: what computational expertise of forensic documents is; what has been done in the field so far and what the future looks like; what the expertise is worth, what its public image is, and how to improve both; who is doing what in the field, where, and for how much; and how the expertise software functions. The primary target readers are computer scientists and forensic document examiners, at the student and professional level. Paleographers, historians of science and technology, and scientific policy makers can also profit from the book. Concise and practical, featuring an attractive and functional layout design, the book is supplemented with graphical data representations, statistics, resource lists, and extensive references to facilitate further study.
This book is focused on the development of a data integration framework for retrieval of biodiversity information from heterogeneous and distributed data sources. The data integration system proposed in this book links remote databases in a networked environment, supports heterogeneous databases and data formats, links databases hosted on multiple platforms, and provides data security for database owners by allowing them to keep and maintain their own data and to choose information to be shared and linked. The book is a useful guide for researchers, practitioners, and graduate-level students interested in learning state-of-the-art development for data integration in biodiversity.