Volume 2 applies the linear algebra concepts presented in Volume 1 to optimization problems that frequently occur throughout machine learning. This book blends theory with practice by not only carefully discussing the mathematical underpinnings of each optimization technique but by applying these techniques to linear programming, support vector machines (SVM), principal component analysis (PCA), and ridge regression. Volume 2 begins by discussing preliminary concepts of optimization theory such as metric spaces, derivatives, and the Lagrange multiplier technique for finding extrema of real-valued functions. The focus then shifts to the special case of optimizing a linear function over a region determined by affine constraints, namely linear programming. Highlights include careful derivations and applications of the simplex algorithm, the dual-simplex algorithm, and the primal-dual algorithm. The theoretical heart of this book is the mathematically rigorous presentation of various nonlinear optimization methods, including but not limited to gradient descent, the Karush-Kuhn-Tucker (KKT) conditions, Lagrangian duality, the alternating direction method of multipliers (ADMM), and the kernel method. These methods are carefully applied to hard margin SVM, soft margin SVM, kernel PCA, ridge regression, lasso regression, and elastic-net regression. MATLAB programs implementing these methods are included.
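As a flavor of the methods listed above, here is a minimal gradient-descent sketch in Python (the book itself ships MATLAB programs; this Python illustration, the quadratic objective, and the step size are invented for the example):

```python
# Minimal gradient descent on f(x, y) = (x - 3)^2 + 2*(y + 1)^2,
# whose unique minimizer is (3, -1). Illustrative only; the book's
# own examples are written in MATLAB.

def grad(x, y):
    # Analytic gradient of f.
    return 2 * (x - 3), 4 * (y + 1)

def gradient_descent(lr=0.1, steps=200):
    x, y = 0.0, 0.0  # arbitrary starting point
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= lr * gx
        y -= lr * gy
    return x, y

x, y = gradient_descent()
print(round(x, 4), round(y, 4))  # converges toward (3, -1)
```

The same iteration pattern underlies the more elaborate methods the book covers; only the objective, the gradient, and the step-size rule change.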
Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them. In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information. Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust.
Nonnegative matrix factorization (NMF) in its modern form has become a standard tool in the analysis of high-dimensional data sets. This book provides a comprehensive and up-to-date account of the most important aspects of the NMF problem and is the first to detail its theoretical aspects, including geometric interpretation, nonnegative rank, complexity, and uniqueness. It explains why understanding these theoretical insights is key to using this computational tool effectively and meaningfully. Nonnegative Matrix Factorization is accessible to a wide audience and is ideal for anyone interested in the workings of NMF. It discusses some new results on the nonnegative rank and the identifiability of NMF and makes available MATLAB codes for readers to run the numerical examples presented in the book. Graduate students starting to work on NMF, as well as researchers interested in better understanding the NMF problem and how to use it, will find this book useful. It can be used in advanced undergraduate and graduate-level courses on numerical linear algebra and related advanced topics, and requires only a basic knowledge of linear algebra and optimization.
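The book's MATLAB codes are not reproduced here, but the classic Lee-Seung multiplicative-update rule for NMF can be sketched in a few lines of Python (the tiny random problem, dimensions, and iteration count below are my own assumptions for illustration):

```python
import numpy as np

# Lee-Seung multiplicative updates for X ~= W @ H with W, H >= 0,
# minimizing the Frobenius norm ||X - W H||.

def nmf(X, r, iters=500, eps=1e-9):
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # H stays nonnegative
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # W stays nonnegative
    return W, H

# An exactly rank-2 nonnegative matrix, so a good factorization exists.
rng = np.random.default_rng(1)
X = rng.random((6, 2)) @ rng.random((2, 5))
W, H = nmf(X, r=2)
error = np.linalg.norm(X - W @ H) / np.linalg.norm(X)  # relative error
```

Because the updates only multiply by nonnegative ratios, W and H remain entrywise nonnegative throughout, which is exactly the constraint that gives NMF its interpretability.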
Keeping pace with the rapidly shifting environment for all information services workers, this book provides readers with the knowledge and tools needed to manage the ebb and flow of reference services in today's libraries. From the ongoing flood of misinformation to the swift changes occasioned by the pandemic, a myriad of factors is spurring our profession to rethink reference services. Luckily, this classic text is back in a newly overhauled edition that thoughtfully addresses the evolving reference landscape. Designed to complement every introductory library reference course, Cassell and Hiremath's book also serves as the perfect resource to guide current practitioners in their day-to-day work. It teaches failsafe methods for identifying important materials by matching specific types of questions to the best available sources, regardless of format. Guided by a national advisory board of educators and experts, this thoroughly updated text presents chapters covering fundamental concepts, major reference sources, and special topics while also offering fresh insights on timely issues, including:
- a basic template for the skills required and expectations demanded of the reference librarian
- the pandemic's effect on reference services, and how the ingenuity employed by libraries in providing remote and virtual reference is here to stay
- a new chapter dedicated to health information, with a special focus on health equity and information sources
- selecting and evaluating reference materials, with strategies for keeping up to date
- a heightened emphasis on techniques for evaluating sources for misinformation, and ways to give library users the tools to discern facts vs. "fake facts"
- reference as programming, readers' advisory services, developmentally appropriate material for children and young adults, and information literacy
- evidence-based guidance on handling microaggressions in reference interactions, featuring discussions of cultural humility and competence alongside recommended resources on implicit bias
- managing, assessing, and improving reference services
- the future of information and reference services, encapsulating existing models, materials, and services to project possible evolutions in the dynamic world of reference.
Pandas has rapidly become one of Python's most popular data analysis libraries. With pandas you can efficiently sort, analyze, filter, and munge almost any type of data. In Pandas in Action, a friendly and example-rich introduction, author Boris Paskhaver shows you how to master this versatile tool and take the next steps in your data science career.
About the technology: Anyone who's used spreadsheet software will find pandas familiar. While its column-based grids might remind you of Excel or Google Sheets, pandas is more flexible and far more powerful. It can efficiently perform operations on millions of rows and be used in tandem with other Python libraries for statistics, machine learning, and more. And best of all, using pandas doesn't mean sacrificing user productivity or needing to write tons of complex code. It's clean, intuitive, and fast.
About the book: Pandas in Action makes it easy to dive into Python-based data analysis. You'll learn to use pandas to automate repetitive spreadsheet functionality and derive insight from data by sorting columns, filtering data subsets, and creating multi-leveled indices. Each chapter is a self-contained tutorial, letting you dip in when you need to troubleshoot tricky problems. Best of all, you won't be learning from sterile or randomly created data. You'll start with a variety of datasets that are big, small, incomplete, broken, and messy and learn how to clean and format them for proper analysis.
What's inside:
- Import a CSV, identify issues with its data structures, and convert it to the proper format
- Sort, filter, pivot, and draw conclusions from a dataset and its subsets
- Identify trends from text-based and time-based data
- Organize, group, merge, and join separate datasets
- Real-world datasets that are easy to download and explore
About the reader: For readers experienced with spreadsheet software who know the basics of Python.
About the author: Boris Paskhaver is a software engineer, Agile consultant, and educator. His six programming courses on Udemy have amassed 236,000 students, with an average course rating of 4.59 out of 5. He first used Python and the pandas library to derive a variety of business insights at the world's #1 jobs site, Indeed.com.
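A tiny illustration of the clean-filter-sort workflow described above (the inline dataset is invented for this example; it is not one of the book's datasets):

```python
import pandas as pd

# A small, deliberately messy dataset: mixed case, a missing value.
df = pd.DataFrame({
    "city": ["Boston", "austin", "Denver", None],
    "temp_f": [38, 91, 55, 70],
})

# Clean: drop rows with a missing city, normalize the text column.
clean = df.dropna(subset=["city"]).assign(city=lambda d: d["city"].str.title())

# Filter a subset and sort it, as the blurb describes.
warm = clean[clean["temp_f"] > 50].sort_values("temp_f", ascending=False)
print(warm["city"].tolist())  # ['Austin', 'Denver']
```

The same handful of verbs (dropna, assign, boolean filtering, sort_values) scales from four rows to millions.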
This textbook shows how to develop the functional requirements of (information) systems. It emphasizes the importance of considering the complete development path of a functional requirement, i.e. not only the individual development steps but also their proper combination and alignment. The book consists of two parts: Part I presents the underlying theory while Part II contains various illustrative case studies. Part I starts with an introduction to the topic (Chapter 1). Then it explains how to develop functional requirements that represent the conceptual dynamics of an information system (Chapters 2 and 3). Chapters 4 and 5 explain how to model the conceptual statics of an information system. Chapter 6 gives some directions for implementation. Finally, Chapter 7 explains how a 'technical manager' can organize and manage the development process. As an illustration of the theory, Part II contains three substantial case studies. The first one (Chapter 8) presents a stepwise development starting from an informal situation sketch via a simple domain model towards a precisely specified, full-fledged conceptual data model, which finally is translated to an SQL database. In the second case study (Chapter 9) the author converts the well-known non-trivial use case Process Sale from Larman into a textual System Sequence Description (SSD). For validation purposes, that textual SSD is subsequently translated into natural language and into a graphical SSD. The third case study (Chapter 10) shows the applicability of the author's approach to a control system and also illustrates the typical situation that the requirements are constantly changing during development. This book is written for (under)graduate students in software engineering or information systems who want to learn how to carry out adequate problem analysis, to make good system specifications, and/or to understand how to organize and manage an IS-development process.
It also targets practitioners who want to improve their problem analysis abilities and/or their ability to make good system specifications. To this end, it includes more than 150 explanatory figures and is accompanied by a Web site which provides additional course material such as slides, additional exercises, solutions to exercises, and the code for the figures used in the book.
Building a successful product usually involves teams of people, and many choose the Scrum approach to aid in creating products that deliver the highest possible value. Implementing Scrum gives teams a collection of powerful ideas they can assemble to fit their needs and meet their goals. The ninety-four patterns contained within are elaborated nuggets of insight into Scrum's building blocks, how they work, and how to use them. They offer novices a roadmap for starting from scratch, yet they help intermediate practitioners fine-tune or fortify their Scrum implementations. Experienced practitioners can use the patterns and supporting explanations to get a better understanding of how the parts of Scrum complement each other to solve common problems in product development. The patterns are written in the well-known Alexandrian form, whose roots in architecture and design have enjoyed broad application in the software world. The form organizes each pattern so you can navigate directly to organizational design tradeoffs or jump to the solution or rationale that makes the solution work. The patterns flow together naturally through the context sections at their beginning and end. Learn everything you need to know to master and implement Scrum one step at a time, the agile way.
This book constitutes the refereed proceedings of the 16th Scandinavian Conference on Image Analysis, SCIA 2011, held in Ystad, Sweden, in May 2011. The 74 revised full papers presented were carefully reviewed and selected from 140 submissions. The papers are organized in topical sections on multiple view geometry; segmentation; image analysis; categorization and classification; structure from motion and SLAM; medical and biomedical applications; 3D shape; medical imaging.
Advocates a cybersecurity "social contract" between government and business in seven key economic sectors. Cybersecurity vulnerabilities in the United States are extensive, affecting everything from national security and democratic elections to critical infrastructure and the economy. In the past decade, the number of cyberattacks against American targets has increased exponentially, and their impact has been more costly than ever before. A successful cyber-defense can only be mounted with the cooperation of both the government and the private sector, and only when individual corporate leaders integrate cybersecurity strategy throughout their organizations. A collaborative effort of the Board of Directors of the Internet Security Alliance, Fixing American Cybersecurity is divided into two parts. Part One analyzes why the US approach to cybersecurity has been inadequate and ineffective for decades and shows how it must be transformed to counter the heightened systemic risks that the nation faces today. Part Two explains in detail the cybersecurity strategies that should be pursued by each major sector of the American economy: health, defense, financial services, utilities and energy, retail, telecommunications, and information technology. Fixing American Cybersecurity will benefit industry leaders, policymakers, and business students. This book is essential reading to prepare for the future of American cybersecurity.
Achieve awesome user experiences and performance with simple, maintainable code! Embrace the full stack of web development, from styling with Bootstrap, building an interactive user interface with Angular 4, to storing data quickly and reliably in PostgreSQL. With this fully revised new edition, take a holistic view of full-stack development to create usable, high-performing applications with Rails 5.1. Rails is a great tool for building web applications, but it's not the best at everything. Embrace the features built into your database. Learn how to use front-end frameworks. Seize the power of the application stack through Angular 4, Bootstrap, and PostgreSQL. When used together, these powerful and easy-to-use tools will open you to a new world of possibilities. This second edition is updated to cover Angular - a completely reworked front-end framework - and dives into new Postgres 9.6 features such as UPSERT. Also new is Webpack coverage, to develop the front-end code for your Rails application. Create a usable and attractive login form using Bootstrap's styles, while ensuring the database table backing it is secure using Postgres' check constraints. See how creating an advanced Postgres index for a case-insensitive search speeds up your back end - enabling you to create a dynamic user experience using Angular 4. Create reusable components that bring Bootstrap and Angular together and effectively use materialized views for caching within Postgres. Get your front end working with Webpack, use Postgres' features from migrations, and write unit tests for all of it. All of this within Rails 5.1. You'll gain the confidence to work at every level of the application stack, bringing the right solution to every problem. What You Need: This book covers Postgres 9.5, Rails 5, and Ruby 2.3. You should have some experience with basic Rails concepts and a cursory understanding of JavaScript, CSS, and SQL, but by no means need to be an expert. 
You'll learn how to install Postgres on your computer or use a free version of it in the cloud.
Acta Numerica is an annual publication containing invited survey papers by leading researchers in numerical mathematics and scientific computing. The papers present overviews of recent developments in their area and provide state-of-the-art techniques and analysis.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2010. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of various architectures. As HLRS operates the largest NEC SX-8 vector system in the world, this book gives an excellent insight into the potential of vector systems, covering the main methods in high performance computing. Its outstanding results in achieving the highest performance for production codes are of particular interest for both scientists and engineers. The book includes a wealth of color illustrations and tables.
The Future of Enriched, Linked, Open and Filtered Metadata is a comprehensive and accessible guide to creating accurate, consistent, complete, user-centred and quality metadata that supports the user tasks of finding, identifying, selecting, obtaining and exploring information resources. Based on the author’s many years of academic research and work as a cataloguing and metadata librarian, it shows readers how they can configure, create, enhance and enrich their metadata for print and digital resources. The book applies examples using MARC21, RDA, FRBR, BIBFRAME, subject headings and name authorities. It also uses screenshots from cutting edge library management systems, discovery interfaces and metadata tools. Coverage includes: definitions, discussions, and comparisons among MARC, FRBR, LRM, RDA, Linked Data and BIBFRAME standards and models discussion of the underlying principles and protocols of Linked Data vis-à -vis library metadata practical metadata configuration, creation, management, and cases employing cutting edge LMS, discovery interfaces, formats and tools discussion around why metadata needs to be enriched, linked, open and filtered to ensure the information resources described are discoverable and user friendly consideration of metadata as a growing and continuously enhancing, customer-focused and user-driven practice where the aim is to support users to find and retrieve relevant resources for their research and learning. This practical book uses simple and accessible language to make sense of the many existing and emerging metadata standards, models and approaches. It will be a valuable resource for anyone involved in metadata creation, management and utilisation as well as a reference for LIS students, especially those undertaking information organisation, cataloguing and metadata modules.
Addiction, anxiety, depression, loneliness, low self-esteem, empathy development, troubled relationships, fake news, propaganda and even threats to democracy are just some of the challenges new technology presents. Antitrust law has failed to prevent the emergence of a few dominant big tech platforms and regulation has not kept pace with surveillance capitalism. The internet was created on the assumption that all users are equal, but children and the vulnerable are not. In Born Digital, Robert Wigley distils the mountains of available research on the subject and brings to bear his wealth of institutional experience to present a roadmap for society to radically and urgently reset its relationship with technology - for the sake of future generations.
YouTube's most successful purveyor of computer nostalgia brings those stories to print. This book celebrates the most exciting period in the history of technology: the arrival of the home computer and home gaming console. For a time, an exciting and ever-changing array of different companies fought for supremacy, leaving a lasting legacy of great gameplay and surreal design we'll never experience again. It features screenshots of nostalgic games that will bring joy to the heart of anyone who grew up in the 80s or early 90s, alongside stunning studio photography of the computers that imprinted themselves on a generation's minds.
Get started with Apache Flink, the open source framework that powers some of the world's largest stream processing applications. With this practical book, you'll explore the fundamental concepts of parallel stream processing and discover how this technology differs from traditional batch data processing. Longtime Apache Flink committers Fabian Hueske and Vasia Kalavri show you how to implement scalable streaming applications with Flink's DataStream API and continuously run and maintain these applications in operational environments. Stream processing is ideal for many use cases, including low-latency ETL, streaming analytics, and real-time dashboards as well as fraud detection, anomaly detection, and alerting. You can process continuous data of any kind, including user interactions, financial transactions, and IoT data, as soon as it is generated.
- Learn concepts and challenges of distributed stateful stream processing
- Explore Flink's system architecture, including its event-time processing mode and fault-tolerance model
- Understand the fundamentals and building blocks of the DataStream API, including its time-based and stateful operators
- Read data from and write data to external systems with exactly-once consistency
- Deploy and configure Flink clusters
- Operate continuously running streaming applications
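Flink's DataStream API is beyond the scope of a blurb, but the core event-time-windowing idea the book teaches can be sketched framework-free in Python (the events, values, and window size here are invented for illustration; this is not Flink code):

```python
from collections import defaultdict

# Framework-free sketch of a tumbling event-time window: bucket
# events by their *event* timestamp rather than their arrival
# order, then aggregate per window. This is the idea behind
# Flink's time-based operators. Events are (event_time_s, value).
events = [(1, 10), (62, 5), (3, 7), (65, 2), (130, 1)]  # out of order

def tumbling_window_sum(events, size=60):
    windows = defaultdict(int)
    for ts, value in events:
        windows[ts // size * size] += value  # key by window start time
    return dict(sorted(windows.items()))

print(tumbling_window_sum(events))  # {0: 17, 60: 7, 120: 1}
```

Note that the out-of-order event at time 3 still lands in the first window; handling such late data robustly (watermarks, state, fault tolerance) is precisely what a real engine like Flink adds.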
Whether you want to automate tasks, analyze data, parse logs, talk to network services, or address other systems requirements, writing your own command-line tool may be the fastest, and perhaps the most fun, way to do it. The Go programming language is a great choice for developing tools that are fast, reliable, and cross-platform. Create command-line tools that work with files, connect to services, and even manage external processes, all while using tests and benchmarks to ensure your programs are fast and correct. When you want to develop cross-platform command-line tools that are fast and reliable, use Go, a modern programming language that combines the reliability of compiled languages with the ease of use and flexibility of dynamically typed languages. Work through practical examples to develop elegant and efficient tools by applying Go's rich standard library, its built-in support for concurrency, and its expressive syntax. Use Go's integrated testing capabilities to automatically test your tools, ensuring they work reliably even across code refactoring. Develop CLI tools that interact with your users by using common input/output patterns, including environment variables and flags. Handle files to read or persist data, and manipulate paths consistently in cross-platform scenarios. Control processes and handle signals, and use a benchmark-driven approach and Go's concurrency primitives to create tools that perform well. Use powerful external libraries such as Cobra to create modern and flexible tools that handle subcommands, and develop tools that interact with databases, APIs, and network services. Finally, leverage what you learned by tackling additional challenges at the end of each chapter. What You Need: Go 1.8 or higher, an internet connection to download the example files and additional libraries, and a text editor to write your programs.
If programming is magic then web scraping is surely a form of wizardry. By writing a simple automated program, you can query web servers, request data, and parse it to extract the information you need. The expanded edition of this practical book not only introduces you to web scraping, but also serves as a comprehensive guide to scraping almost every type of data from the modern web. Part I focuses on web scraping mechanics: using Python to request information from a web server, performing basic handling of the server's response, and interacting with sites in an automated fashion. Part II explores a variety of more specific tools and applications to fit any web scraping scenario you're likely to encounter.
- Parse complicated HTML pages
- Develop crawlers with the Scrapy framework
- Learn methods to store data you scrape
- Read and extract data from documents
- Clean and normalize badly formatted data
- Read and write natural languages
- Crawl through forms and logins
- Scrape JavaScript and crawl through APIs
- Use and write image-to-text software
- Avoid scraping traps and bot blockers
- Use scrapers to test your website
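As a taste of the parsing half of that workflow, here is a dependency-free sketch using only Python's standard library (the HTML snippet is invented; real scrapers would fetch pages over the network and typically reach for richer tools such as Scrapy):

```python
from html.parser import HTMLParser

# Extract all link targets from an HTML document using the
# standard library's event-driven parser.
HTML = """
<html><body>
  <a href="/docs">Docs</a>
  <p>No link here.</p>
  <a href="https://example.com">Example</a>
</body></html>
"""

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

parser = LinkExtractor()
parser.feed(HTML)
print(parser.links)  # ['/docs', 'https://example.com']
```

Collecting links like this is also the first step of a crawler: each extracted href becomes the next URL to request.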
Academic and practitioner journals in fields from electronics to business to language studies, as well as the popular press, have for over a decade been proclaiming the arrival of the "computer revolution" and making far-reaching claims about the impact of computers on modern western culture. Implicit in many arguments about the revolutionary power of computers is the assumption that communication, language, and words are intimately tied to culture -- that the computer's transformation of communication means a transformation, a revolutionizing, of culture.
What does it take to be the leader of a design firm or group? We often assume they have all the answers, but in this rapidly evolving industry they're forced to find their way like the rest of us. So how do good design leaders manage? If you lead a design group, or want to understand the people who do, this insightful book explores behind-the-scenes strategies and tactics from leaders of top design companies throughout North America. Based on scores of interviews he conducted over a two-year period, from small companies to massive corporations like ESPN, author Richard Banfield covers a wide range of topics, including:
- How design leaders create a healthy company culture
- Innovative ways for attracting and nurturing talent
- Creating productive workspaces, and handling remote employees
- Staying on top of demands while making time for themselves
- Consistent patterns among vastly different leadership styles
- Techniques and approaches for keeping the work pipeline full
- Making strategic and tactical plans for the future
- Mistakes that design leaders made, and how they bounced back
The Benchmark Series is designed for students to develop a mastery skill level in Microsoft Word, Excel, Access, and PowerPoint. Its graduated, three-level instructional approach moves students to analyse, synthesise, and evaluate information. Multi-part, project-based exercises build skill mastery with activities that require independent problem solving and challenge students to execute strategies they will encounter in today's workplace. Complete course content is delivered in the Cirrus learning environment through a series of scheduled assignments that report to a grade book to track student progress and achievements.